<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[The Times]]></title><description><![CDATA[a pulse on technology markets]]></description><link>https://www.thetimes.blog</link><image><url>https://substackcdn.com/image/fetch/$s_!F8Lk!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F759a8de3-33dd-4676-ad92-0e298c62f56c_636x636.png</url><title>The Times</title><link>https://www.thetimes.blog</link></image><generator>Substack</generator><lastBuildDate>Wed, 06 May 2026 01:44:02 GMT</lastBuildDate><atom:link href="https://www.thetimes.blog/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Evan O'Donnell]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[thetimesblog@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[thetimesblog@substack.com]]></itunes:email><itunes:name><![CDATA[Evan O'Donnell]]></itunes:name></itunes:owner><itunes:author><![CDATA[Evan O'Donnell]]></itunes:author><googleplay:owner><![CDATA[thetimesblog@substack.com]]></googleplay:owner><googleplay:email><![CDATA[thetimesblog@substack.com]]></googleplay:email><googleplay:author><![CDATA[Evan O'Donnell]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Teaching Metal to Move]]></title><description><![CDATA[A Conversation with Flexion CEO Nikita Rudin]]></description><link>https://www.thetimes.blog/p/teaching-metal-to-move</link><guid isPermaLink="false">https://www.thetimes.blog/p/teaching-metal-to-move</guid><dc:creator><![CDATA[Evan O'Donnell]]></dc:creator><pubDate>Tue, 05 May 2026 23:30:31 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!MtxE!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe629d99a-3ee2-40f4-9c12-aed8daad87cb_554x324.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>One of the central challenges in building AI is accounting for the idiosyncrasies of human behavior.</p><p>LLMs require massive datasets because language is so varied and context-specific. Coding agents have to navigate the individual quirks of how engineers build software. Autonomous cars must learn to anticipate human errors and driving patterns.</p><p>That&#8217;s why synthetic environments only get you so far in fine-tuning LLMs. <strong>When human behavior is a key variable in the system, it&#8217;s hard to simulate reality.</strong></p><p>Robotics is different. The hardest problems are not about humans but about <em>physics</em>. How a leg balances against gravity, a hand grips a wire cable, a body recovers from a stumble.</p><p>Unlike human behavior, physics is dictated by <em>laws</em>. 
Gravity, friction, and momentum can all be modeled and derived from equations.</p><p><strong>That invariance is why simulated environments can go much further in training robots than in training LLMs.</strong></p><p><a href="https://scholar.google.com/citations?user=1kKJYVIAAAAJ&amp;hl=fr">Nikita Rudin</a> is building a company off this insight.</p><p>Nikita is the CEO and co-founder at <a href="https://flexion.ai/">Flexion</a>, a young startup building AI software for humanoid<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> robots. His team is developing a general-purpose &#8220;brain&#8221; that can be fine-tuned and customized in simulated environments, then deployed into various different robot types across their customer network.</p><p>Before Flexion, Nikita and his co-founder <a href="https://x.com/HoellerDavid">David</a> received their PhDs in robotics at ETH Zurich and worked together at NVIDIA.</p><p>Nikita also co-authored the original Isaac Gym<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> <a href="https://arxiv.org/pdf/2108.10470">paper</a>, the breakthrough research from NVIDIA that established GPU-accelerated simulation<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> as a viable way to train robots at scale.</p><p>Nikita and I talked about how robots are being built today, his thoughts on how the market will mature, and why the underlying tools in robotics are advancing so quickly.</p><p>He has a plasticity in his thinking that I love seeing in founders. Technical depth paired with creativity. A clear point of view earned by real experience, but held with genuine curiosity.</p><p>He even offered a prediction for when we will start to see lots more robots in the wild. 
(Sorry <a href="https://www.youtube.com/watch?v=KkYC1Vtj4NU">Melania</a>, the robots are not here &#8211; at least not yet!)</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!MtxE!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe629d99a-3ee2-40f4-9c12-aed8daad87cb_554x324.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!MtxE!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe629d99a-3ee2-40f4-9c12-aed8daad87cb_554x324.png 424w, https://substackcdn.com/image/fetch/$s_!MtxE!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe629d99a-3ee2-40f4-9c12-aed8daad87cb_554x324.png 848w, https://substackcdn.com/image/fetch/$s_!MtxE!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe629d99a-3ee2-40f4-9c12-aed8daad87cb_554x324.png 1272w, https://substackcdn.com/image/fetch/$s_!MtxE!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe629d99a-3ee2-40f4-9c12-aed8daad87cb_554x324.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!MtxE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe629d99a-3ee2-40f4-9c12-aed8daad87cb_554x324.png" width="402" height="235.1046931407942" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e629d99a-3ee2-40f4-9c12-aed8daad87cb_554x324.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:324,&quot;width&quot;:554,&quot;resizeWidth&quot;:402,&quot;bytes&quot;:100331,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.thetimes.blog/i/196546799?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe629d99a-3ee2-40f4-9c12-aed8daad87cb_554x324.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!MtxE!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe629d99a-3ee2-40f4-9c12-aed8daad87cb_554x324.png 424w, https://substackcdn.com/image/fetch/$s_!MtxE!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe629d99a-3ee2-40f4-9c12-aed8daad87cb_554x324.png 848w, https://substackcdn.com/image/fetch/$s_!MtxE!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe629d99a-3ee2-40f4-9c12-aed8daad87cb_554x324.png 1272w, https://substackcdn.com/image/fetch/$s_!MtxE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe629d99a-3ee2-40f4-9c12-aed8daad87cb_554x324.png 1456w" sizes="100vw" 
loading="lazy"></picture><div></div></div></a></figure></div><div><hr></div><h3><strong>01 |</strong>  Days versus years</h3><p><strong>EO: You&#8217;ve been in robotics a while, before all the current hype. I&#8217;d love to hear your story and how that connects to the thesis behind Flexion.</strong></p><p>NR: I was doing my PhD at ETH Zurich in reinforcement learning (RL) for robotics and also working part-time at NVIDIA. This was before AI took off, back when NVIDIA&#8217;s GPUs were mostly powering video games.</p><p>At NVIDIA, my co-founder <a href="https://x.com/HoellerDavid">David</a> and I were on the team building Isaac Gym, Isaac Sim, and Isaac Lab.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> These are NVIDIA&#8217;s open-source simulation tools, which have since become foundational in the field of robotics learning.</p><p>During this time, we saw how fast the technology was moving.</p><p>In 2020, getting a quadruped, a four-legged robot, to take a few steps without falling was a big deal. By 2024, the same quadruped could jump and climb through collapsed buildings and piles of gravel, terrain I&#8217;d struggle to walk across myself.</p><p><strong>Impressive, in four years&#8230;</strong></p><p>The progress was amazing.</p><p>Over time, I got involved in some humanoid projects. If you remember Jensen&#8217;s NVIDIA keynote two years ago, where he first showcased robots on stage, David and I were heavily involved in everything shown then.</p><p>Working on humanoids taught us two things.</p><p>First, the same RL approach works across different robot form factors, whether you&#8217;re working with a quadruped or humanoid.</p><p>Second, and more importantly, policies<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a> learned on one robot transfer to others very efficiently.</p><p><strong>What makes that transfer so efficient? I&#8217;d assume, incorrectly it sounds like, that in robotics, hardware and software are coupled too tightly for this methodology to transfer so easily.</strong></p><p>Here&#8217;s how our pipeline works.</p><p>Each robot enters our digital simulation environment as a spec file, describing the hardware and its kinematics<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a> in a standardized format.</p><p>Inside the simulation, we train the robot&#8217;s control policy<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a> with reinforcement learning. The robot attempts a task many times, gets rewarded for progress, and the neural network gradually learns how to control the body.</p><p>Once trained, we deploy the policy to the physical robot, where it sends commands to the motors to navigate the real world.</p><p><strong>So essentially, you&#8217;re feeding the simulator a description of the body, training the brain in a simulated environment to operate that body, and then dropping the brain into a physical system.</strong></p><p>That&#8217;s right. And because the initial spec files are standardized, plugging in a new robot form is relatively straightforward. 
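</p><p>To make that concrete, the training loop described above can be sketched, very roughly, like this. Every detail below (the spec fields, the toy dynamics, the random-search update standing in for a real RL algorithm such as PPO) is an illustrative placeholder rather than Flexion&#8217;s actual code:</p><pre><code>import numpy as np

# Illustrative robot "spec file": just joint count and torque limits.
SPEC = {"num_joints": 2, "torque_limit": 1.0}

def simulate_episode(policy_weights, spec, steps=200):
    """Toy rollout: keep a simple jointed body near upright."""
    rng = np.random.default_rng(0)
    angles = rng.uniform(-0.1, 0.1, spec["num_joints"])    # start near upright
    velocities = np.zeros(spec["num_joints"])
    total_reward = 0.0
    for _ in range(steps):
        obs = np.concatenate([angles, velocities])
        torque = np.clip(policy_weights @ obs,
                         -spec["torque_limit"], spec["torque_limit"])
        # Crude stand-in dynamics: gravity destabilizes, torque corrects.
        velocities += 0.05 * (np.sin(angles) + torque)
        angles += 0.05 * velocities
        total_reward += float(-np.sum(angles ** 2))         # reward staying upright
    return total_reward

def train(spec, iterations=300, noise=0.1):
    """Random-search policy improvement (a crude stand-in for PPO-style RL)."""
    weights = np.zeros((spec["num_joints"], 2 * spec["num_joints"]))
    best = simulate_episode(weights, spec)
    for _ in range(iterations):
        candidate = weights + noise * np.random.randn(*weights.shape)
        score = simulate_episode(candidate, spec)
        if score > best:
            weights, best = candidate, score
    return weights, best

policy, reward = train(SPEC)
print("trained policy reward:", reward)</code></pre><p>Real pipelines differ in nearly every detail (GPU-parallel physics, thousands of simultaneous environments, neural-network policies), but the shape of the loop is the same: simulate, score, update, deploy.</p><p>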
Also, everything around training the &#8220;brain&#8221; &#8211; the simulator, the general RL methods, our pipelines &#8211; is reusable.</p><p>The motor interfaces, which let us deploy a trained policy back into a robot, aren&#8217;t fully standardized across the industry yet. But they&#8217;re close enough that we can usually deploy without much friction.</p><p>In other words, the standards and system boundaries in robotics are pretty well defined. And that lets us deploy our training methods across many different robot types.</p><p><strong>So you were observing the generality and scalability of these methods. What gave you that extra kick to start something from scratch?</strong></p><p>Very quickly, we saw that we could run our software on other people&#8217;s robots, sometimes without ever seeing those robots in-person. And we could often do so more efficiently than their internal teams.</p><p>That gave us the conviction to build a company around this approach.</p><p><strong>Can you quantify how much more efficiently?</strong></p><p>Days versus years.</p><p><strong>Okay, well that&#8217;s a compelling reason to start something!</strong></p><p>Yeah. We developed this belief that simulation together with reinforcement learning should be used as much as possible.</p><p>I finished my PhD, we left NVIDIA, and started Flexion.</p><p><strong>Give us a snapshot of the company today. Where are you in the R&amp;D cycle?</strong></p><p>We&#8217;re just a little over a year old. We have 45 people in Zurich, and just opened our SF office. We have multiple customers. Some are robotics companies integrating our software. Some are big industrial players deploying robots in their factories and warehouses.</p><p>I&#8217;d say we are as deployed as it gets in robotics, which, to be honest, is still very early.</p><div><hr></div><h3><strong>02 |</strong>  Becoming Android for robotics</h3><p><strong>One assumption behind Flexion is you can be competitive as the &#8220;Android for robotics&#8221; &#8211; a horizontal, hardware-agnostic system that can be trained to run on any humanoid.</strong></p><p><strong>A lot of smart people are making the opposite bet. Tesla and Figure, for example, are both vertically integrated. Do you think they&#8217;re missing something?</strong></p><p>I don&#8217;t think they&#8217;re wrong. Vertical integration lets you control your own destiny. But <em>only</em> if you can do it more efficiently than partnering.</p><p>The market will probably support one, maybe two, vertical players. Tesla will probably be one of them.</p><p>But beyond that, you&#8217;ll have a huge ecosystem of others that will own their section of the supply chain. Because combining hardware and software, being excellent at every part of the stack, and then scaling it to thousands of robots, is extremely hard. Being world-class in one part of the chain is easier.</p><p>The self-driving car market is a good analog. Early on, everyone was vertically integrated. Most died. A few big companies survived. And now, traditional OEMs are increasingly buying third-party software from companies like Wayve and Mobileye to power their vehicles.</p><p>We&#8217;re seeing the same thing in robotics. Everyone started wanting to be vertically integrated. Now the ecosystem is shifting away from that.</p><p>That&#8217;s what our customers are doing. And you see companies like Boston Dynamics and Apptronik now partnering with DeepMind.</p><p>It makes sense. 
Otherwise, everyone has to relearn the same hard lessons of building AI software for robots from scratch.</p><p><strong>I see. So unless you have extraordinary scale and capital, it&#8217;s too hard to own everything.</strong></p><p>Absolutely. I think vertical integration is a story <em>investors</em> like to hear. Because it&#8217;s easy to envision the moat. But technically, in the majority of cases, it makes more sense to go horizontal.</p><p><strong>Let&#8217;s assume, for a second, you have unlimited capital. Is there a technical case for why vertical integration will usually underperform?</strong></p><p>Well, talent is still a limited resource. There are maybe 200 people in the world that can do this kind of work. Throwing money at it will not solve the problem.</p><p><strong>It rarely does. But fair &#8211; let&#8217;s assume unlimited resources. Capital, talent, time.</strong></p><p><strong>I&#8217;m trying to isolate if there&#8217;s a </strong><em><strong>technical</strong></em><strong> rationale for this horizontal strategy. An edge you, your suppliers, and your customers have that vertical players can&#8217;t recreate.</strong></p><p>Technically speaking, the limiting factor in robotics is data, and specifically, a diversity of data. You need to constantly collect new types of data and train on it to maintain a performance edge.</p><p>If you&#8217;re vertically integrated, you only learn from your own network. By partnering with other providers, you can benefit from a much wider network of robots and partners, all collecting more data and learning about what is working.</p><p><strong>What&#8217;s a tangible way you&#8217;ve seen that play out?</strong></p><p>There are three main types of actuators and gearboxes.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-8" href="#footnote-8" target="_self">8</a> These are the hardware components that move a robot&#8217;s joints. Every robotics company has to bet on one of the three types for their product.</p><p>Because our software runs on all three, we know the real trade-offs in production. We have the data, and can advise our customers on the best option for their specific use case.</p><div><hr></div><h3><strong>03 |</strong>  Most people get simulation wrong</h3><p><strong>At Flexion, you&#8217;ve made a big bet on simulation. What do you think about the sim-to-real gap, the argument that training in simulated environments won&#8217;t get us anywhere near commercial deployment?</strong></p><p>We believe the point where simulation hits diminishing returns is much further away than most people think.</p><p>Throughout my PhD, everyone said simulators were limited. But we kept pushing the frontier with them, proving them wrong. We&#8217;ll keep doing that in the future.</p><p><strong>Why is simulation so effective? I spend most of my time in software, where LLMs tend to improve fastest when they&#8217;re tuned on real production data. Synthetic environments only take you so far. That&#8217;s not the same in robotics?</strong></p><p>Robotics is different. People make this same false comparison between robotics and self-driving cars too, but the analogy doesn&#8217;t hold.</p><p>In software applications and self-driving cars, anomalous behavior comes from <em>humans</em>.</p><p>In robotics, we&#8217;re solving actual, physical interactions between robots and objects &#8211; holding a certain weight, precise placement, things like that. 
Those are easier to simulate than human behavior.</p><p>At some point, simulated training will get advanced enough that we&#8217;ll need real-world data to push the frontier. But today, that&#8217;s not the bottleneck.</p><p><strong>At what point do the &#8220;pessimists&#8221; think simulation will hit a wall?</strong></p><p>Right now, there is a narrative that simulated RL can&#8217;t work for manipulation, how a robot handles and interacts with objects.</p><p>I agree manipulation is harder to train than locomotion.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-9" href="#footnote-9" target="_self">9</a> You&#8217;re asking the robot to interact with exotic materials &#8211; soft objects, cables, and liquids.</p><p><em>But simulation as a field is advancing very quickly.</em> You can now train on those materials in a simulated environment in a way you couldn&#8217;t just a few years ago.</p><p><strong>Will simulation continue to advance, or will progress start to decay?</strong></p><p>It will keep advancing.</p><p>First off, simulation isn&#8217;t really a data problem. Many AI advances hit diminishing returns as datasets grow larger and harder to collect. But GPU-accelerated simulators aren&#8217;t trained on data &#8212; they&#8217;re just math and physics, encoded to run efficiently on GPUs. Companies like Google and NVIDIA are investing heavily to make these simulators run faster and capture the physics of the real world more accurately. That investment from Big Tech will push things forward.</p><p>Second, coding agents are a big accelerant. Good simulations need many intricate scenarios, with lots of variation. Building those used to be manual. Now, Claude Code can build whole simulation assets from scratch, interpolating from just a few examples.</p><p><strong>That&#8217;s pretty amazing.</strong></p><p>And third, there&#8217;s a lot of algorithmic progress happening, which is where we spend a lot of time at Flexion.</p><p>For example, RL works by trial and error. The robot tries random actions, stumbles onto solutions, and gets rewarded when they work. But manipulation requires a very precise sequence of actions. The space of &#8220;right&#8221; sequences is so narrow that you can burn a lot of calories in training and never land on one that works.</p><p>We solve this by bootstrapping<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-10" href="#footnote-10" target="_self">10</a> the RL process with human demonstrations &#8211; either a person remote-controlling the robot through the task a few times, or video of a human performing the task from their own point of view.</p><p><strong>In tuning LLMs, data quality often trumps data quantity. From what you&#8217;re saying, it seems like the same is true in robotics.</strong></p><p>Yes. In coding and software, a few high quality examples plus RL beats large quantities of mediocre data. That lesson carries over into robotics.</p><div><hr></div><h3><strong>04 |</strong>  NVIDIA&#8217;s positioning</h3><p><strong>You&#8217;re pretty deep in the NVIDIA ecosystem. You and your co-founder worked there. They invested in Flexion. How do you think NVIDIA is positioning itself within robotics?</strong></p><p>I can&#8217;t speak directly for the company. But remember, NVIDIA&#8217;s business is selling GPUs.</p><p>Their strategy is to create new ecosystems that need GPUs. 
They open-source just enough tooling to seed many companies on top, but not enough to solve the whole problem, otherwise they&#8217;d cannibalize demand for their own chips.</p><p>They ran this strategy successfully with LLMs. They&#8217;re doing it with self-driving cars. And now they&#8217;re starting something similar in robotics.</p><p>So far, they haven&#8217;t pushed toward actually deploying robots themselves.</p><p><strong>So NVIDIA is seeding the ecosystem and pushing developers onto their chips. What role do you want </strong><em><strong>Flexion</strong></em><strong> to fill in the ecosystem?</strong></p><p>Our goal is to standardize everything that goes into helping hardware manufacturers bring stable products to market.</p><p>We want to do that by bridging the gap between all the open-source research and tooling in the space, and actually deploying robots that do real work. NVIDIA is one player building open-source tools, but there&#8217;s an amazing ecosystem of researchers and academics pushing new methods all the time.</p><p>In practice, we end up building a lot of our own tooling in-house to support our customers, but we do try to leverage as much public work as we can.</p><p><strong>What&#8217;s the biggest gap in the research right now? Is there a tool or capability that would remove a major R&amp;D bottleneck?</strong></p><p>The creation of what we call <em>digital twins</em>. Not a random simulated environment, but one that resembles a specific place. For example, if we need to train robots to operate in a specific factory, we need to create a replica of that in the simulator, in the digital world.</p><p>You can pay someone to build these assets today, but they&#8217;re very expensive. Coding agents are starting to build them for us, but it&#8217;s not zero-effort yet. Getting there will take time.</p><p><strong>Where specifically do coding agents still struggle?</strong></p><p>It&#8217;s hard to pinpoint. But wherever there&#8217;s more complexity and specificity.</p><p><strong>Doesn&#8217;t that contradict the sim-to-real conversation we were having earlier? Isn&#8217;t the implication that simulation actually </strong><em><strong>isn&#8217;t</strong></em><strong> enough, that what we really need is more real-world data to get these robots ready for prime time?</strong></p><p>That&#8217;s interesting. The way I think about it is &#8211; we should be using real-world data to improve the <em>simulation</em> <em>environment</em>, not to train the robot directly. Because if we can get the learning to happen in simulation, that is orders of magnitude more efficient and scalable than conducting training in the real world.</p><p>Simulation capabilities are advancing quickly enough that the cost-benefit makes it clear that&#8217;s where we should focus.</p><div><hr></div><h3><strong>05 | </strong> To adapt, or not to adapt?</h3><p><strong>One of the challenges in robotics is real-time adaptation. The real world is messy. Should humanoids be programmed to course-correct on their own when they make a mistake? How do you think about that problem?</strong></p><p>I have a controversial take. I&#8217;m not a fan of robots adapting live without human supervision.</p><p>When you deploy robots commercially, you have to think about risk. You have to certify these systems are safe, that they are statistically stable. If robots are adapting on their own, that&#8217;s very hard to guarantee. 
I don&#8217;t like the idea of robots changing their brain in the wild.</p><p>For a while, I expect robots will still require manual retraining around failure cases. We can recreate those failure cases in simulation, retrain on a combination of synthetic and real data, and run validation before deploying the update to thousands of robots.</p><p><strong>But the humanoids you work with are general-purpose. They&#8217;re not specialized to run one task on repeat. I assume you can&#8217;t pre-program </strong><em><strong>every</strong></em><strong> permutation of reality a robot will encounter, even in a contained environment.</strong></p><p><strong>Doesn&#8217;t that imply a </strong><em><strong>greater</strong></em><strong> need to allow for adaptation?</strong></p><p>That&#8217;s a good and hard question to answer.</p><p>Over time, as coding agents and LLMs improve, the whole loop &#8211; deploy, see what fails, have an engineer diagnose it, change your training code, retrain, redeploy &#8211; will become more automatic.</p><p>A human supervising that automated loop is probably safer and ultimately more efficient than letting robots adapt on their own. But only if we can get it more automated.</p><p>There&#8217;s also nuance around <em>which part</em> of the system you let adapt.</p><p>Our software has three layers &#8211; high-level understanding, local motion planning, and actual execution. I wouldn&#8217;t let a robot independently change the execution layer. That could get very unpredictable.</p><p>But letting it adapt the high-level strategy, like <em>how</em> it opens a door or moves around an object after one failed attempt, is much easier and safer.</p><p>For example, if a robot receives the task &#8220;make me a coffee,&#8221; the robot then has to plan &#8211; go to the kitchen, find the coffee machine. If it sees there&#8217;s no coffee machine, it needs to reason its way to a new plan. That kind of adaptation is safe.</p><p>What we don&#8217;t want is the robot independently changing how it physically moves.</p><p><strong>So different parts of the stack get different levels of autonomy.</strong></p><p>Exactly. And the high-level layer is primarily trained on real-world and human data, more like an LLM-style reasoning system. So we sometimes train that layer in simulation, but we don&#8217;t always have to.</p><div><hr></div><h3><strong>06 |</strong>  The year of the robot</h3><p><strong>When do humanoids get deployed at scale? Two years out, five, ten? I assume we&#8217;ll have commercial deployment before consumer&#8230;</strong></p><p>Yes, I think commercial will come first.</p><p>Right now, deployment isn&#8217;t economical. But by the end of the year, I expect Flexion and others will prove that robots can deliver commercial ROI on specific tasks.</p><p>By 2027, you&#8217;ll see robots doing real work. They&#8217;ll be constrained to one specific task, trained to operate in one specific setting. But they will be fully deployed.</p><p><strong>How do customers think about ROI?</strong></p><p>It will be benchmarked to salary. Take a task done by three people across multiple shifts, tally what you&#8217;d pay them over a year for that work, and compare that to the amortized cost of a robot doing the same level of output that year.</p><p>Today, you can&#8217;t just bring a robot in a box and have it work. You need five engineers looking over its shoulder. 
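</p><p>As a purely illustrative back-of-the-envelope version of that comparison (every figure below is assumed for the sake of the example, not a number from Flexion or its customers):</p><pre><code># Hypothetical annual comparison, benchmarked to salary.
workers = 3
cost_per_worker = 45_000                  # assumed fully loaded annual cost
human_cost = workers * cost_per_worker    # 135,000 per year

robot_price = 90_000                      # assumed hardware price
amortization_years = 3
robot_cost = robot_price / amortization_years    # 30,000 per year

engineers_watching = 5
oversight_cost = engineers_watching * 150_000    # assumed annual cost each

print("humans:            ", human_cost)
print("robot + oversight: ", robot_cost + oversight_cost)   # far worse than humans
print("robot alone:       ", robot_cost)                    # far better than humans</code></pre><p>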
We have to strip out the engineering oversight for robots to make economic sense for customers.</p><p><strong>OK, so 2027 &#8211; The Year of the Robot. That&#8217;s a good prediction to end on.</strong></p><div><hr></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>A <em>humanoid</em> is a robot built in the general shape of a human, designed to operate in environments and with tools made for people.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p><em>Isaac Gym</em> was NVIDIA&#8217;s original, open-source reinforcement learning environment for robots. Released in 2021, it was the first framework to train complex robot policies entirely on a single GPU.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p><em>GPU-accelerated simulation </em>is a technique that uses GPU chips to run thousands of virtual robots in parallel, each in its own simulated physical environment. By running many simulations at once, researchers can compress what would otherwise be years of trial-and-error training into hours.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p><em>Isaac Sim</em> is a photorealistic robotics simulator, meaning it renders virtual worlds with realistic lighting, textures, and physics so that what a robot experiences in simulation matches reality. <em>Isaac Lab </em>is the open-source framework that replaced Isaac Gym as the standard environment for training.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>A <em>policy</em> is the robot&#8217;s decision-making engine, typically a neural network that decides what the robot should do and how it should move, given its sensory inputs.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p><em>Kinematics</em> describes the robot&#8217;s physical structure, how its  joints and links move in space, which forms the basis for motion planning and control.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>In reinforcement learning, the <em>control policy</em> refers to the learned rules for translating sensor inputs into motor commands.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-8" href="#footnote-anchor-8" class="footnote-number" contenteditable="false" target="_self">8</a><div class="footnote-content"><p><em>Actuators</em> are the motors that drive a robot&#8217;s joints. 
<em>Gearboxes</em> pair with them to convert fast, weak motor rotation into slow, powerful motion for walking and lifting.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-9" href="#footnote-anchor-9" class="footnote-number" contenteditable="false" target="_self">9</a><div class="footnote-content"><p><em>Manipulation</em> is how a robot interacts with objects &#8211; grasping, lifting, placing, assembling. <em>Locomotion</em> is how a robot moves through the world &#8211; walking, running, climbing stairs, balancing.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-10" href="#footnote-anchor-10" class="footnote-number" contenteditable="false" target="_self">10</a><div class="footnote-content"><p><em>Bootstrapping</em> means kickstarting the reinforcement learning process with a small amount of high-quality data &#8211; in this case, human demonstrations.</p></div></div>]]></content:encoded></item><item><title><![CDATA[Three Theses]]></title><description><![CDATA[Venture capital is crowding into a handful of names.]]></description><link>https://www.thetimes.blog/p/three-theses</link><guid isPermaLink="false">https://www.thetimes.blog/p/three-theses</guid><dc:creator><![CDATA[Evan O'Donnell]]></dc:creator><pubDate>Tue, 28 Apr 2026 01:05:12 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!8AVV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0aef397-a758-4883-935b-8be1e7bb7b3f_6696x3452.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Venture capital is crowding into a handful of names. But the innovation potential is wider than this trend suggests.</p><p>This post offers an alternative framing of the next phase of software, and three startup ideas that follow:</p><p><strong>A.</strong>  AI-native e-commerce, rebuilt around a single intelligence layer</p><p><strong>B.</strong>  Dev tooling that translates product taste into code</p><p><strong>C.</strong>  Data and risk layer for physical AI</p><p>If you&#8217;re building in one of these areas, or something adjacent, I&#8217;d love to connect.</p><div><hr></div><h3><strong>01 |</strong>  The fog of AI</h3><p>We&#8217;re three or four years into the AI cycle, and capital has rushed in with unusual intensity.</p><p>For the past couple of decades, the top 5 U.S. 
venture deals in any given year typically represented anywhere between 2-10% of total capital invested in the asset class.</p><p>In 2025, that number climbed to 24% of the total.</p><p><strong>So far in 2026 &#8211; nearly 70%(!)</strong></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!8AVV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0aef397-a758-4883-935b-8be1e7bb7b3f_6696x3452.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!8AVV!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0aef397-a758-4883-935b-8be1e7bb7b3f_6696x3452.png 424w, https://substackcdn.com/image/fetch/$s_!8AVV!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0aef397-a758-4883-935b-8be1e7bb7b3f_6696x3452.png 848w, https://substackcdn.com/image/fetch/$s_!8AVV!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0aef397-a758-4883-935b-8be1e7bb7b3f_6696x3452.png 1272w, https://substackcdn.com/image/fetch/$s_!8AVV!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0aef397-a758-4883-935b-8be1e7bb7b3f_6696x3452.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!8AVV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0aef397-a758-4883-935b-8be1e7bb7b3f_6696x3452.png" width="1456" height="751" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a0aef397-a758-4883-935b-8be1e7bb7b3f_6696x3452.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:751,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1016360,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.thetimes.blog/i/195675525?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0aef397-a758-4883-935b-8be1e7bb7b3f_6696x3452.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!8AVV!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0aef397-a758-4883-935b-8be1e7bb7b3f_6696x3452.png 424w, https://substackcdn.com/image/fetch/$s_!8AVV!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0aef397-a758-4883-935b-8be1e7bb7b3f_6696x3452.png 848w, https://substackcdn.com/image/fetch/$s_!8AVV!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0aef397-a758-4883-935b-8be1e7bb7b3f_6696x3452.png 1272w, 
https://substackcdn.com/image/fetch/$s_!8AVV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0aef397-a758-4883-935b-8be1e7bb7b3f_6696x3452.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>To put that in context... in real dollars, early AI companies have already surpassed the total <em>lifetime</em> equity raised by Google, Facebook, Apple, Amazon, Microsoft, Nvidia, Tesla, and 21 other leading tech businesses &#8211; <em>combined.</em></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!l8Am!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd49c11d7-1ab9-4c08-a070-2e7455bd105f_6696x4602.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!l8Am!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd49c11d7-1ab9-4c08-a070-2e7455bd105f_6696x4602.png 424w, https://substackcdn.com/image/fetch/$s_!l8Am!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd49c11d7-1ab9-4c08-a070-2e7455bd105f_6696x4602.png 848w, https://substackcdn.com/image/fetch/$s_!l8Am!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd49c11d7-1ab9-4c08-a070-2e7455bd105f_6696x4602.png 1272w, https://substackcdn.com/image/fetch/$s_!l8Am!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd49c11d7-1ab9-4c08-a070-2e7455bd105f_6696x4602.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!l8Am!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd49c11d7-1ab9-4c08-a070-2e7455bd105f_6696x4602.png" width="1456" height="1001" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d49c11d7-1ab9-4c08-a070-2e7455bd105f_6696x4602.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1001,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:943605,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.thetimes.blog/i/195675525?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd49c11d7-1ab9-4c08-a070-2e7455bd105f_6696x4602.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!l8Am!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd49c11d7-1ab9-4c08-a070-2e7455bd105f_6696x4602.png 424w, https://substackcdn.com/image/fetch/$s_!l8Am!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd49c11d7-1ab9-4c08-a070-2e7455bd105f_6696x4602.png 848w, https://substackcdn.com/image/fetch/$s_!l8Am!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd49c11d7-1ab9-4c08-a070-2e7455bd105f_6696x4602.png 1272w, https://substackcdn.com/image/fetch/$s_!l8Am!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd49c11d7-1ab9-4c08-a070-2e7455bd105f_6696x4602.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Whether AI&#8217;s technical fundamentals support these deployment patterns remains an open question.</p><p>But this isn't the only way generational companies get built. Most of the defining companies of the last two cycles raised a fraction of what today&#8217;s AI leaders have, and they created extraordinary value.</p><p>Yes, times have changed. 
Capital markets are more competitive, timelines have compressed, and AI&#8217;s unit economics may not comp cleanly to past cycles.</p><p>But there&#8217;s a lesson in the data. Money isn't the scarce input in company-building.</p><p><strong>Capital only amplifies a great product, it cannot create one.</strong></p><p>Three or four years into AI, the fog is starting to lift. We understand more now about how this technology is built, what it can reliably handle in production, how it scales, and how it gets adopted. The surface area for innovation is much wider than where capital has pooled so far.</p><p>The AI story is far from written.</p><div><hr></div><h3><strong>02 |  </strong>Software isn&#8217;t dying</h3><p>The evolution of software is essentially a long-term project in making computers easier to talk to.</p><p>Computer programming started at the level of bits (1s and 0s), then moved to machine code, then to foundational languages like FORTRAN, then higher-level ones like Python and TypeScript.</p><p>Each transition moved us up the ladder of abstraction, bringing programming closer to how humans think and hiding the technical friction of the layer below.</p><p><strong>AI moves us up another layer &#8211; from code to </strong><em><strong>intent</strong></em><strong>.</strong> Instead of running pre-written commands, the user only specifies an objective, and the machine handles the rest.</p><p>More specifically, AI is changing software in three fundamental ways:</p><div><hr></div><p><em><strong>#1  Deterministic &#8594; probabilistic</strong></em></p><blockquote><p>Software will now produce a distribution of outcomes, rather than executing a pre-programmed path.</p></blockquote><p><em><strong>#2  Human-operated &#8594; agent-operated</strong></em></p><blockquote><p>A growing share of software interactions will be initiated by machines, with humans as orchestrators.</p></blockquote><p><em><strong>#3  Fixed &#8594; malleable</strong></em></p><blockquote><p>Software will be generated and customized on-demand directly by the end-user rather than the vendor.</p></blockquote><div><hr></div><p><strong>Moats now have to come from what raw intelligence cannot recreate on its own. </strong>For example, accumulated memory and context, instrumentation that turns activity into a new type of data asset, or new distribution models that bypass one-to-many channels like SEO and app stores.</p><p>Software isn&#8217;t going anywhere. It's just abstracting upward, the way it always has. It&#8217;s taking a new shape, with new sources of defensibility underneath.</p><p>If AI makes software easier to talk to, I expect we'll see a lot more of it.</p><div><hr></div><p>Below are three ideas that follow from this view.</p><p>This post is in part a broadcast, to find like-minded founders working on these problems.</p><p>But it&#8217;s also a creative exercise. A chance to set aside the &#8220;end of software&#8221; and &#8220;AI will commoditize everything&#8221; narratives for a moment and to imagine the products that could drive long-term value in this new paradigm.</p><div><hr></div><h4><strong>Thesis A: </strong>An e-commerce brain</h4><p>Shopify (NASDAQ: SHOP, $156B EV) and Amazon (NASDAQ: AMZN, $2.8T EV) represent opposing strategies in online commerce.</p><p>Shopify gives merchants full ownership of brand, customers, and data. Each shop runs independently, merchant data stays siloed, and discovery happens off-platform through SEO and paid acquisition. 
Shopify never aggregates shopper demand to put merchants in competition with one another.</p><p>Amazon does the opposite. They standardize the buyer experience, control the platform end-to-end, and use merchant data to optimize the marketplace. Merchants are inventory feeders, with little control over their brand and data.</p><p>Both companies have started retrofitting for AI. Shopify now lets stores <a href="https://timespan.us22.list-manage.com/track/click?u=1ef09c473dad0e37c3fb15e20&amp;id=9f08621cbf&amp;e=abab8c0019">syndicate</a> products into ChatGPT and Gemini and recently launched merchant-side AI <a href="https://timespan.us22.list-manage.com/track/click?u=1ef09c473dad0e37c3fb15e20&amp;id=07985d76bd&amp;e=abab8c0019">tooling</a> to automate store operations. Amazon launched their AI shopping assistant <a href="https://timespan.us22.list-manage.com/track/click?u=1ef09c473dad0e37c3fb15e20&amp;id=4af04f1171&amp;e=abab8c0019">Rufus</a> for product discovery. Each has adopted standards like <a href="https://timespan.us22.list-manage.com/track/click?u=1ef09c473dad0e37c3fb15e20&amp;id=9caceff6b3&amp;e=abab8c0019">ACP</a> and <a href="https://timespan.us22.list-manage.com/track/click?u=1ef09c473dad0e37c3fb15e20&amp;id=03edb5675b&amp;e=abab8c0019">UCP</a><a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> to streamline agent-driven transactions.</p><p><strong>But both are bolting AI onto an architecture that keeps buyer and seller context separate.</strong></p><p>AI is only as smart as the information you put in front of it. You can stitch buyer and seller data together at the moment of inference. But product discovery and matching get sharper and cheaper when both sides live in the same ecosystem.</p><p><strong>The opportunity is to build the first AI-native commerce platform where buyer, product, and merchant context live in a single, shared memory graph. </strong>One substrate that both sides read from and write to continuously.</p><p>This unlocks deep personalization and automation at scale &#8211; a personal shopping assistant for users, and a modern operating system for merchants:</p><ul><li><p><strong>Merchant. </strong>A new shop owner types <em>&#8220;Launch a new pottery store to sell my work.&#8221;</em> The system runs structured onboarding to build the store, assembling the store with the best tools for each task &#8211; e.g., Lovable for storefront design; Gemini for creative assets; custom agents for BI, inventory, and customer support. Running the shop happens through natural-language commands &#8211; <em>&#8220;raise prices on mugs 10%,&#8221;</em> or <em>&#8220;summarize sales this week.&#8221;</em> The merchant focuses on brand, product, and strategy. The system handles the rest.</p></li><li><p><strong>Buyer.</strong> Each user is assigned a personal agent that learns the shopper over time &#8211; body type, fit, taste, brands they love, returns and why. The shopper profile deepens through natural conversation &#8211; e.g., <em>&#8220;Buy from sustainable brands,&#8221;</em> or <em>&#8220;show me items in this color palette,&#8221;</em> uploading a reference photo. 
The ecosystem becomes a destination, a place you go to connect with your digital concierge.</p></li><li><p><strong>Shared memory.</strong> Every interaction from shoppers and merchants &#8211; query, click, purchase, return, store redesign &#8211; writes to the same context graph, capturing rich semantic context for every entity in the ecosystem. With more data and use, the graph compounds into the most valuable asset in the network.</p></li><li><p><strong>Discovery network.</strong> After a shopper query, a retrieval layer narrows the catalog to a smaller candidate set. An LLM then reads buyer and product context and reasons to determine fit. The match is grounded in <em>why this product works for this person</em>, not in how similar buyers behaved.</p></li></ul><p><strong>This redefines the distribution and business model for online shopping.</strong></p><p>Today&#8217;s half-trillion dollar digital ad industry is really just a workaround for siloed data &#8211; a guessing game that understands buyers and products separately, then tries to match them at query time.</p><p>When discovery is grounded in real fit and deep context on both sides of the market, the network can monetize satisfaction and loyalty &#8211; not ad placement or click-through rates.</p><div><hr></div><h4><strong>Thesis B: </strong>Automating product taste</h4><p>For fifty years, software was built to deliver <em>certainty</em>.</p><p>Developers wrote instructions in code, specifying exactly how an application should perform. The same input was designed to produce the same output every time.</p><p>Because of this predictability, we sliced the software development lifecycle into discrete, sequential stages &#8211; plan, write, test, deploy, monitor. Value accrued to the companies that owned the artifact produced in each step. Jira owned the ticket. GitHub owned the commit. Datadog owned the log.</p><p><strong>AI-native software inverts this.</strong></p><p><em>Uncertainty</em> is a core property of LLMs. The same prompt to the same model yields a distribution of plausible responses. &#8220;Good&#8221; output is a moving target that shifts with new user behavior, context, and model improvements.</p><p>Today, developers try to impose certainty on AI by wrapping the model in scaffolding, using techniques like prompt templates, hard-coded guardrails, handwritten eval sets.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> Existing LLM observability platforms like Braintrust and LangSmith support this work by giving a window into model performance, but they rest on a faulty premise &#8211; that humans should still draft and run evaluation as a discrete step in a workflow.</p><p><strong>In a world of model variance, where software &#8220;behavior&#8221; is constantly shifting, that becomes a bottleneck.</strong></p><p>AI systems instead need a self-orchestrated loop &#8211; real-time, automatic evaluations that keep performance aligned with developer intent. Taste &#8211; the judgment about what &#8220;good&#8221; product performance looks like &#8211; has to be encoded in the system itself, not maintained by humans from the outside.</p><p>The target product is a gateway proxy between the application and its model providers. Prompts, responses, tool calls, and metadata flow through it. Behind the gateway are two components:</p><p><strong>(1) Autonomous evaluation. 
</strong>The system generates its own rubrics from three inputs &#8211; the application design, live user behavior, and the full execution trace of each model call. Every call in production is scored against the rubric, which self-refines as the system learns which configurations correlate with successful outcomes.</p><p>Auto eval is difficult to build. LLM judges can already <a href="https://timespan.us22.list-manage.com/track/click?u=1ef09c473dad0e37c3fb15e20&amp;id=b89e4c7d04&amp;e=abab8c0019">match</a> humans on many evaluation tasks, but auto-generated rubrics still <a href="https://timespan.us22.list-manage.com/track/click?u=1ef09c473dad0e37c3fb15e20&amp;id=404d55fb21&amp;e=abab8c0019">underperform</a> human-authored ones. Closing that gap is the core technical bet.</p><p>The good news is this has become much more feasible in the last 18 months. For example, <a href="https://timespan.us22.list-manage.com/track/click?u=1ef09c473dad0e37c3fb15e20&amp;id=cf41c8d0bc&amp;e=abab8c0019">research</a> on scalable rubric generation points toward a future where the data flowing through a product like this could continuously refine the rubric layer.</p><p><strong>(2) Tiered remediation. </strong>When behavior drifts from intent, the system can take action &#8211; rerouting, retrying with different context, or alerting the developer with a natural-language explanation.</p><p>This product treats evaluation not as instrumentation the developer bolts on, but as something invisible, automatic, and always on. The product is working if the engineer never has to write the full rubric directly.</p><p><strong>The moat is the data model.</strong></p><p>In this product, each rubric criterion is a first-class entity that links to outcomes and refines automatically as the system learns which criteria predict success. Over time, the rubric becomes deeply attuned to the specific application. The result is a custom, self-improving evaluation layer that's difficult to rip out.</p><p>Building this is no small technical feat. But the team that gets it right will encode product taste, turning it from a craft engineers practice from the outside into a native property of software itself.</p><div><hr></div><h4><strong>Thesis C: </strong>Risk layer for robotics</h4><p>By 2035, autonomous systems are projected to represent a multi-trillion-dollar opportunity, with autonomous vehicles and robotics as the largest segments.</p><p><strong>Commercial deployment of these assets demands a fundamental rethinking of how to analyze and price risk.</strong></p><p>Traditional insurance assumes risk is a function of human behavior. Actuarial models analyze historical claims and demographic data, and assume behavior stays statistically consistent over time.</p><p>But when the &#8220;driver&#8221; is software, that breaks. Risk needs to be measured continuously, by technical inputs like software version, perception quality, the decision-making policy, and whether the system is operating inside the conditions it was designed for.</p><p>The incumbents &#8211; Verisk (NASDAQ: VRSK, $26B EV), Relx (NYSE: RELX, $72B EV) &#8211; have built their products and customer relationships around episodic claims. Analyzing risk from telemetry streams is a different problem entirely, and would require an overhaul of their infrastructure.</p><p><strong>There&#8217;s an opportunity now to build something from scratch.</strong></p><p>The target product is an API-first middleware layer. 
A lightweight agent installed on the autonomous asset ingests live telemetry (sensor health, perception logs, near-miss data), runs it through in-house AI/ML models, and generates dynamic risk scores for carriers and operational insights for fleet operators.</p><p>With a first-mover advantage, this platform can set the schema and actuarial foundation the entire category prices against, similar to the role<a href="https://timespan.us22.list-manage.com/track/click?u=1ef09c473dad0e37c3fb15e20&amp;id=00e76ebcfa&amp;e=abab8c0019"> ISO&#8217;s loss costs</a><a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> play today for property claims. Once a large enough dataset is amassed, multiple business lines follow &#8211; analytics to operators, revenue on claims forensics, licensed data to physical AI companies for training.</p><p><strong>The winning product unlocks data network effects.</strong></p><p>For example, if the platform learns that a specific LiDAR<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> blockage pattern predicts crashes in delivery robots, it can price the same failure mode in farm equipment immediately, without waiting years for new claims. As more fleet types contribute data, underwriting improves, adoption accelerates, and the data edge compounds.</p><p>Autonomous fleets cannot deploy at scale without insurance, and carriers cannot underwrite without telemetry. The platform that resolves this standoff has the opportunity to establish itself as the de facto standard &#8211; and make our warehouses and factory floors safer along the way.</p><div><hr></div><p>Many of these early AI titans are building extraordinary technology. Whether the capital flowing to them reflects durable value or cycle dynamics, time will tell.</p><p>But what I do believe is that the opportunity beyond them is enormous.</p><p>AI, after all, is not a product in and of itself. It&#8217;s a precondition. Raw material. And there&#8217;s still an enormous amount of shaping left to do.</p><div><hr></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p><em>Agent Commerce Protocol (ACP)</em> is an open specification co-developed by OpenAI and Stripe that lets AI agents complete transactions on behalf of buyers. <em>Universal Commerce Protocol (UCP)</em> is an open standard co-developed by Google and Shopify that defines how AI agents discover products, manage carts, and complete checkouts across merchant backends.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p><em>Prompt templates</em> are reusable scaffolds that structure how a model is asked to do something. <em>Guardrails</em> are rules that constrain what the model can output (filtering for tone, blocking topics, enforcing format). <em>Eval sets</em> are curated test cases used to check whether changes to a prompt or model improved performance. 
For a good primer on evaluations, <a href="https://hamel.dev/blog/posts/evals-faq/">click here</a>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p><em>ISO</em> (Insurance Services Office, now part of Verisk) publishes standardized &#8220;loss cost&#8221; data &#8211; historical claims experience aggregated across carriers &#8211; that most property and casualty insurers use as the actuarial baseline for pricing their policies. It&#8217;s the de facto reference layer for the industry.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p><em>LiDAR (Light Detection and Ranging)</em> is a sensor type that fires rapid laser pulses to measure distances and build a real-time 3D map of the environment. It&#8217;s a core perception input for most autonomous vehicles and robots.</p></div></div>]]></content:encoded></item><item><title><![CDATA[Bespoke, at Scale]]></title><description><![CDATA[A Conversation with Surojit Chatterjee]]></description><link>https://www.thetimes.blog/p/bespoke-at-scale</link><guid isPermaLink="false">https://www.thetimes.blog/p/bespoke-at-scale</guid><dc:creator><![CDATA[Evan O'Donnell]]></dc:creator><pubDate>Wed, 01 Apr 2026 19:35:40 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!PaK5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc40194d1-e3d8-4c1f-acdf-a22d5e628758_692x370.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>For decades, we&#8217;ve accepted a basic tradeoff: products scale, customization does not.</p><p>The beauty of software is its near-zero marginal cost. Build one product, ship it to millions. But tailoring a product to each individual customer used to mean extra code and one-off integrations, expenses that grow linearly with each new client.</p><p>AI is upending this tradeoff. The marginal cost of customization is falling precipitously.</p><p>When software can reshape itself around each organization &#8211; learn its processes, adapt with user feedback &#8211; customization <em>becomes</em> the product.</p><p><strong>Companies can now sell bespoke, at scale.</strong></p><p><a href="https://x.com/surojit">Surojit Chatterjee</a> was early to this shift. He&#8217;s the founder and CEO of <a href="https://www.ema.ai/">Ema</a>, a platform that builds autonomous AI employees for large companies like<a href="https://www.hitachi.com"> </a>Hitachi and ADP.</p><p>Before Ema, Surojit joined Coinbase as their Chief Product Officer through its 2021 IPO. He also spent over a decade at Google, scaling Shopping and Mobile Ads into multi-billion dollar businesses.</p><p>He and the team built Ema on two core technologies. Their Generative Workflow Engine (GWE) takes organizational goals, assembles specialized sub-agents to pursue them, and orchestrates execution in real-time. Underneath that sits EmaFusion, a routing layer that matches each task to the best model across 100+ LLMs.</p><p>Surojit has spent years helping large organizations deploy AI. 
As agentic systems move from demos to large-scale workloads, I wanted his perspective as someone who has seen the adoption curve firsthand.</p><p>What struck me in our conversation was how much traditional software has constrained our thinking. There are countless startups falling into the trap of running the SaaS playbook on top of a fundamentally different product and unit economic model.</p><p>Surojit left me with a clearer picture of how AI is actually being deployed today. And with some optimism that these systems, if managed well, can free people to pursue more meaningful work.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!PaK5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc40194d1-e3d8-4c1f-acdf-a22d5e628758_692x370.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!PaK5!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc40194d1-e3d8-4c1f-acdf-a22d5e628758_692x370.png 424w, https://substackcdn.com/image/fetch/$s_!PaK5!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc40194d1-e3d8-4c1f-acdf-a22d5e628758_692x370.png 848w, https://substackcdn.com/image/fetch/$s_!PaK5!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc40194d1-e3d8-4c1f-acdf-a22d5e628758_692x370.png 1272w, https://substackcdn.com/image/fetch/$s_!PaK5!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc40194d1-e3d8-4c1f-acdf-a22d5e628758_692x370.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!PaK5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc40194d1-e3d8-4c1f-acdf-a22d5e628758_692x370.png" width="418" height="223.4971098265896" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c40194d1-e3d8-4c1f-acdf-a22d5e628758_692x370.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:370,&quot;width&quot;:692,&quot;resizeWidth&quot;:418,&quot;bytes&quot;:158717,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.thetimes.blog/i/192876486?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc40194d1-e3d8-4c1f-acdf-a22d5e628758_692x370.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!PaK5!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc40194d1-e3d8-4c1f-acdf-a22d5e628758_692x370.png 424w, https://substackcdn.com/image/fetch/$s_!PaK5!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc40194d1-e3d8-4c1f-acdf-a22d5e628758_692x370.png 848w, 
https://substackcdn.com/image/fetch/$s_!PaK5!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc40194d1-e3d8-4c1f-acdf-a22d5e628758_692x370.png 1272w, https://substackcdn.com/image/fetch/$s_!PaK5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc40194d1-e3d8-4c1f-acdf-a22d5e628758_692x370.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><div><hr></div><h3><strong>01 |  </strong>&#8220;Custom experiences are the product.&#8221;</h3><p><strong>EO: One of the biggest bottlenecks in AI adoption right now is discovery. Customers know AI is powerful. But they don&#8217;t have a clear mental model for which tasks can reliably be handed to an agent or how to fold that agent into a preexisting workflow.</strong></p><p><strong>How do you handle discovery? Walk me through what happens when Ema sits down with a new customer.</strong></p><p>SC: Customers typically come in with some preconceived ideas about how they want to implement AI. We try to get them to think more holistically, and so we rarely end up building their first request.</p><p>We start with what we call an Agentic Clinic, a half-day session with stakeholders to map their business processes and identify where AI can have an impact. At the end of the Clinic, we align around two or three high-priority opportunities.</p><p>Within 24-48 hours, we build a working prototype for one of those workflows, often with some ready-made components from our platform. That turnaround speed matters, because seeing AI work in practice is what customers usually respond to.</p><p>From there, we connect the prototype to their actual data and show them how the agents run in their environment.</p><p><strong>So how do you get customers thinking beyond their priors, to think more broadly about what AI is capable of delivering?</strong></p><p>Probably best to answer this with an example.</p><p>We were working with a Fortune 500 CHRO who wanted to rethink performance management. She had an existing software tool, built around collecting data and conducting reviews across the company. That process was onerous so they only ran it every six months.</p><p>At first, she asked us to build an agentic replacement. Same workflow, just automated.</p><p>We told her: &#8220;Don&#8217;t think about replacing your existing tool. Think about reimagining the entire process.&#8221;</p><p>With AI, you can now give employees feedback daily. You can read code commits live, to assess the productivity of the engineering team. You can give sales reps on-demand coaching by interpreting CRM data and call transcripts.</p><p>We were able to build and deploy this new feedback system quickly across 500,000 human employees. The company went from reviews every six months to real-time coaching.</p><p>That&#8217;s performance management reimagined with AI. Those opportunities exist across every enterprise function.</p><p><strong>Performance management is an interesting example because it&#8217;s inherently bespoke. OKRs, performance criteria, review cycles&#8230; these are unique to each company&#8217;s operations and culture.</strong></p><p><strong>How do you make sure those nuances don&#8217;t get lost when you hand work over to agents?</strong></p><p>This is a really important point. With AI, custom experiences are no longer services. 
Custom experiences <em>are</em> the product.</p><p>Think about it like this. In the past, if you wanted clothing that fit perfectly, you went to a tailor. It was expensive and took months, but the result was great. Then manufacturing brought ready-made clothes to market. Off-the-rack items don&#8217;t always fit perfectly, but they are instant and cheap.</p><p>SaaS is ready-made clothing. It doesn&#8217;t quite fit. So we layer on humans and custom builds to act as glue between what the product does and what the business needs.</p><p>Now, imagine tailor-made clothes that are instant and cheaper than off-the-rack. You&#8217;d never buy off-the-rack again. <em>That&#8217;s how AI is changing software.</em></p><p>AI agents are becoming the glue between the tooling and workflows. You tell the AI &#8211; &#8220;This is how I want performance management to work. Pull data from these systems. Synthesize it this way.&#8221;</p><p>It takes weeks to build, not months. And once deployed, these systems continue to watch which suggestions humans accept and reject. So the software adapts. It gets more sticky to your organization over time.</p><p><strong>So it sounds like one big difference between deploying SaaS versus AI is the degree of iteration. With AI, you start with a baseline, people use it, and the system improves through use.</strong></p><p><strong>As a product manager, how do you get the customer to go on that journey? How do you prevent churn, when full functionality isn&#8217;t apparent upfront?</strong></p><p>You have to deliver impact right away. The customer needs to see the customization and ROI immediately.</p><p>Deploying Salesforce or Workday inside a large organization can take months because of all the required integration work. With AI we can deploy equivalent functionality in weeks.</p><p>But to drive retention in AI, the software needs to <em>keep</em> delivering magic moments. And it needs to do so on the customer&#8217;s timeline, not the vendor&#8217;s.</p><p>I&#8217;ll give you another example.</p><p>We have a customer with 250,000 employees across 65 countries. They use Ema for the entire employee lifecycle. Managing travel, employee verification, visa letters, tax forms, and so on.</p><p>Before Ema, employees were navigating over 100 SaaS applications for these workflows. I&#8217;m not kidding! Layers and layers of software to complete the simplest task.</p><p>All of that was instantly abstracted away into a UI that Ema generates on the fly. But we also designed the system to be extensible by the user. Our agents can generate new workflows using natural language.</p><p>So every few weeks, we&#8217;re seeing new capabilities added by the employees themselves. Not because we shipped an update, but because the users prompted the system to handle something new.</p><p>There&#8217;s a natural mechanism in the product &#8211; this extensibility &#8211; that keeps these magic moments coming.</p><div><hr></div><h3><strong>02 |  </strong>Don&#8217;t let agents go rogue</h3><p><strong>If employees can build new workflows on their own, that changes the role of the software vendor.</strong></p><p><strong>You&#8217;re no longer building all the features. But you are responsible for the system&#8217;s </strong><em><strong>boundaries,</strong></em><strong> so that the AI stays aligned with its intended purpose.</strong></p><p><strong>That&#8217;s a tricky product challenge. 
Because in any enterprise agents will have competing goals.</strong></p><p><strong>For example, a support agent needs to balance customer satisfaction with cost control.</strong></p><p><strong>You don&#8217;t want the agent going rogue, approving out-of-policy refunds willy-nilly. But you also don&#8217;t want it so constrained that it follows a rigid set of refund rules, and churns the customer.</strong></p><p><strong>How do you balance deterministic<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> guardrails with the non-deterministic judgment that makes agents valuable?</strong></p><p>We deploy a hybrid model.</p><p>Our AI systems are designed as a chain of specialized agents working together. Agent A completes a task and passes its output to Agent B, who passes to Agent C, and so on.</p><p>Broadly speaking, the <em>handoffs</em> between agents &#8211; which agent works on what, in what order &#8211; are coded rules. But what each agent does <em>within</em> each step is non-deterministic. Meaning the AI can exercise its own reasoning to deal with ambiguity and variation within its scoped task.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!UEFF!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcfe8d840-2789-4cd2-80bf-c1dbd9aeb921_2645x1436.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!UEFF!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcfe8d840-2789-4cd2-80bf-c1dbd9aeb921_2645x1436.png 424w, https://substackcdn.com/image/fetch/$s_!UEFF!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcfe8d840-2789-4cd2-80bf-c1dbd9aeb921_2645x1436.png 848w, https://substackcdn.com/image/fetch/$s_!UEFF!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcfe8d840-2789-4cd2-80bf-c1dbd9aeb921_2645x1436.png 1272w, https://substackcdn.com/image/fetch/$s_!UEFF!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcfe8d840-2789-4cd2-80bf-c1dbd9aeb921_2645x1436.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!UEFF!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcfe8d840-2789-4cd2-80bf-c1dbd9aeb921_2645x1436.png" width="1456" height="790" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/cfe8d840-2789-4cd2-80bf-c1dbd9aeb921_2645x1436.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:790,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:355040,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.thetimes.blog/i/192876486?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcfe8d840-2789-4cd2-80bf-c1dbd9aeb921_2645x1436.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!UEFF!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcfe8d840-2789-4cd2-80bf-c1dbd9aeb921_2645x1436.png 424w, https://substackcdn.com/image/fetch/$s_!UEFF!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcfe8d840-2789-4cd2-80bf-c1dbd9aeb921_2645x1436.png 848w, https://substackcdn.com/image/fetch/$s_!UEFF!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcfe8d840-2789-4cd2-80bf-c1dbd9aeb921_2645x1436.png 1272w, https://substackcdn.com/image/fetch/$s_!UEFF!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcfe8d840-2789-4cd2-80bf-c1dbd9aeb921_2645x1436.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h6>Source: <a href="https://www.ema.ai/blog/agentic-ai/generative-workflow-engine-building-emas-brain">https://www.ema.ai/blog/agentic-ai/generative-workflow-engine-building-emas-brain</a></h6><p></p><p>This architecture lets us track which sequences work best. 
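</p><p>A minimal sketch of that hybrid pattern &#8211; hard-coded handoffs, model-driven steps &#8211; with hypothetical names throughout (not Ema&#8217;s actual implementation):</p><pre><code>def call_llm(prompt: str) -> str:
    """Stand-in for a real model call; returns a canned answer in this sketch."""
    return f"[model response to: {prompt[:48]}...]"

def classify(ticket: str) -> str:                    # Agent A
    return call_llm(f"Classify this support ticket: {ticket}")

def draft_reply(ticket: str, category: str) -> str:  # Agent B
    return call_llm(f"Draft a reply for a {category} ticket: {ticket}")

def policy_check(reply: str) -> str:                 # Agent C
    return call_llm(f"Flag anything out of policy in: {reply}")

def handle_ticket(ticket: str) -> str:
    # Deterministic guardrail: the A -> B -> C handoff order is fixed in code.
    # Non-deterministic work: what each agent produces within its own step is
    # left to the model's reasoning.
    category = classify(ticket)
    reply = draft_reply(ticket, category)
    return policy_check(reply)

print(handle_ticket("Customer requests a refund outside the 30-day window"))
</code></pre><p>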
And when a given execution path works consistently &#8211; a human approves the outcome, a ticket resolves cleanly &#8211; we feed that pattern into what we call <em>procedural memory</em>, part of the company&#8217;s long-term memory layer.</p><p>So the deterministic components &#8211; the order of operations, which agent handles which step &#8211; provide the guardrails. But within each step, the agent is still reasoning independently.</p><p><strong>As model capabilities improve, the boundary between what should be deterministic and non-deterministic is constantly shifting.</strong></p><p><strong>How do your developers stay current, to continue to strike the right balance between determinism and non-determinism in the product?</strong></p><p>That&#8217;s a very evolved question, a very important question that gets into the crux of building AI-native products.</p><p>Our goal is to build <em>less</em> determinism in the system over time. Every hardcoded step means more maintenance. We&#8217;d rather hand off a decision to an LLM &#8211; <em>if</em> we are confident it can handle that decision reliably &#8211; than rewrite the code as new edge cases pop up.</p><p>We have a small model called EmaFusion that helps us calibrate how much determinism is needed at any point in time. It dynamically assesses the capabilities of every major LLM. When a new version of Claude or Gemini drops, or a new open-source model appears, EmaFusion experiments with different model combinations to see which performs best on a specific task.</p><p>This gives us good visibility into what types of work models can reasonably handle and where they fall short and need more hard-coded logic.</p><div><hr></div><h3><strong>03 |  </strong>The model wars</h3><p><strong>What&#8217;s cool about EmaFusion is you have unique visibility into the relative performance of all major AI models, in real production environments.</strong></p><p><strong>Play the model wars out a year or two.</strong></p><p><strong>What do you think the landscape looks like? For example, will models <a href="https://www.thetimes.blog/p/ai-models-are-converging">converge</a></strong><a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a><strong>? Will open-weight models gain or lose market share?</strong></p><p>I think the frontier models &#8211; Gemini, Claude &#8211; will asymptotically converge in terms of raw capabilities and price. On a benchmark like HLE<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a>, everyone will score about the same.</p><p>But on actual enterprise tasks &#8211; accounts receivable optimization, policy document synthesis &#8211; you&#8217;ll see real differences among models. App builders will need to factor those in.</p><p>We&#8217;re at an important moment in model development.</p><p>As production scales, models will optimize further around the tasks they&#8217;re already good at, the data their customers continue to give them access to, and the business models they see working.</p><p>From what I can see, ChatGPT skews toward consumer. Claude skews enterprise. Gemini wants to play in both, and Google has a ton of data to let them do that. 
And those starting points may influence where capabilities move over time.</p><p>At Ema, our product is built on the premise that there will always be a complex ecosystem of models, each with their own tendencies and capabilities.</p><p>One model might be more encouraging, another more cautious. One stronger at coding, another at summarization.</p><p>By routing across many different models, we can average out those individual quirks. So the system isn&#8217;t overly influenced by any single model&#8217;s biases, and the enterprise gets consistent output.</p><p><strong>In your data, anything you&#8217;re seeing that defies market consensus, that goes against what most believe about these providers?</strong></p><p>When I look at the dynamic trace &#8211; which models EmaFusion selects for each subtask &#8211; there is no &#8220;winning&#8221; model it consistently favors.</p><p>And often, the cutting edge model is not the best choice. For example, for parsing a 30,000 page PDF, a small OCR<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> model we built internally outperforms Claude.</p><div><hr></div><h3><strong>04 |  </strong>Sell simplicity</h3><p><strong>Today, there are hundreds of vertical AI startups &#8212; legal AI, healthcare AI, accounting AI.</strong></p><p><strong>The thesis for these companies is that sector-specific regulation, distribution, and data taxonomies create a unique set of requirements. And so generalist platforms can&#8217;t compete.</strong></p><p><strong>You&#8217;ve made the opposite bet. That the underlying tasks AI performs &#8211; parsing, research, writing &#8211; are fundamentally content-agnostic. And that well-coordinated agents, specialized for these different subtasks, can handle most enterprise workflows, regardless of sector.</strong></p><p><strong>Why do you think horizontal AI represents a more durable product strategy?</strong></p><p>Before transformers<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a>, the industry assumption was that AI had to be very specific. In early image detection, for example, we had different models for identifying cats versus dogs.</p><p>But as we built and scaled LLMs, it turns out that training on a <em>variety</em> of information makes general reasoning more robust.</p><p>It&#8217;s like the human brain. If my kid only studies math and never learns language or music, they won&#8217;t actually be better at math.</p><p><em>Diversity of input strengthens the underlying reasoning capability.</em></p><p>That same principle applies to AI systems. The core operations &#8211; parsing, research, synthesis, execution &#8211; are the same across domains. Sure, we need to do some extra work to train on the specific vocabulary of HR or finance. But fundamentally, the tasks are the same.</p><p>A horizontal platform sharpens that reasoning across every domain it serves. A vertical company only learns from one. That&#8217;s why, maybe counterintuitively, a horizontal platform actually lets us build vertical-specific AI products better and faster.</p><p><strong>I also think a lot of AI startups copied the SaaS playbook as the default without interrogating how the underlying product economics have shifted.</strong></p><p>That&#8217;s right. In traditional SaaS, every workflow and integration has to be explicitly coded. 
Expanding scope means increasing your engineering spend.</p><p>So vendors specialized. They built separate apps for different sectors and corporate functions, each with its own data model, team, and sales motion. Over time, those applications got bolted together. They turned into monster apps like ServiceNow, with endless configuration knobs and months-long setup.</p><p>Then AI came along. The first instinct was to apply these new AI capabilities to the same specialized workflows. So we get legal AI, healthcare AI, finance AI. But that specialization only exists in the first place because customized SaaS tools were usually too expensive to build and maintain.</p><p>AI doesn&#8217;t have those constraints. So we don&#8217;t have to build products for these silos any longer.</p><p><strong>AI seems to challenge the assumption that sector is the right organizing principle for software.</strong></p><p>The way I think about it for an enterprise buyer &#8211; they just want <em>simplification</em>.</p><p>We&#8217;ve over-rotated on specialized software. Right now, one organization might have five apps that do roughly the same thing. And now we&#8217;re bolting hundreds of agents on top of those five different systems. That&#8217;s a lot of unnecessary complexity.</p><p>I spent 15 years at Google. There was a time when people thought separate, vertical search engines would win. We talked about it a lot internally. Turns out, people just want one place to go.</p><p>It&#8217;s the same for enterprises. Complexity is killing the enterprise.</p><p><strong>So if simplification is what enterprises are principally buying with AI, that raises a product question &#8211; simplify </strong><em><strong>what</strong></em><strong>?</strong></p><p><strong>And what I&#8217;m hearing you say is that AI-native companies need to conceptualize work at the goal level, not the task level. Start with the outcome, not the existing tooling.</strong></p><p><strong>If you&#8217;re solving for performance management, the question isn&#8217;t, &#8220;how do we automate data collection for our HRIS?&#8221; It&#8217;s &#8211; &#8220;what do our employees need to improve?&#8221; The former automates semi-annual reviews. The other gets you real-time feedback as people work.</strong></p><p><strong>Goal-level framing doesn&#8217;t fit inside one software category. &#8220;What do our employees need to improve?&#8221; pulls from HR, IT, finance &#8211; all at once. That&#8217;s what pulls you toward a horizontal product.</strong></p><p><strong>And it also simplifies things for the customer. You end up with one system organized around the employee, not five organized around departments.</strong></p><p>Exactly. And right now, there&#8217;s a leapfrog happening. Large enterprises can go from very old legacy software to an agentic solution, and skip over using the modern cloud tools with prescriptive workflows.</p><div><hr></div><h3><strong>05 |  </strong>The great displacement?</h3><p><strong>Worker displacement is an important topic right now.</strong></p><p><strong>Optimists point to prior technology cycles. There will be some displacement, but on net AI will take away tedious work and create new types of jobs.</strong></p><p><strong>Doomers say this time is different. That the speed and breadth of AI will break the historical pattern and cause mass displacement.</strong></p><p><strong>You see firsthand how enterprises are deploying AI, and how humans interface with the technology every day. 
Where do you come down?</strong></p><p>Right now, large enterprises are not radically displacing people.</p><p>Most employees are overworked. Think about a nurse at a hospital, or an HR team buried in tickets. In my experience, AI is mostly freeing them from administrative tasks &#8211;&nbsp;clearing a backlog of work, and making employees more productive as a result.</p><p>In the near term, I think that trend will continue. And as a result, people will shift more of their time to core, higher-value tasks.</p><p>Ten years out, major re-skilling is probably needed. And within that time frame, some will feel the impact faster than others. Customer support reps and software engineers, for example.</p><p>But even those roles won&#8217;t just disappear. They will look different. Workers will have to reconfigure their jobs around AI.</p><p>Software engineers, for example, will have to learn to work with agents to build software, rather than actually writing software themselves. They&#8217;ll still be responsible for building and maintaining systems, even if they are not primarily doing the coding.</p><p><strong>I&#8217;m mostly going off intuition, but I think there&#8217;s a lot of latent demand in the economy right now. Backlogs everywhere. Employees are overloaded. Managers are getting increasingly overwhelmed, which leads to short-term thinking and risk-aversion.</strong></p><p><strong>If AI clears some of that backlog and lets teams get to the work they&#8217;ve been deferring, the floor rises for everyone. With less overhead, small businesses can now compete with larger companies. The economy expands around a new baseline.</strong></p><p>I think you will see a lot more companies in the economy, but they will run leaner.</p><div><hr></div><h3><strong>06 |  </strong>Your code is my opportunity</h3><p><strong>You have a unique seat. You&#8217;re deep in the cutting edge of AI capabilities. You also serve legacy companies and see how this tech gets adopted in practice.</strong></p><p><strong>How does that inform your own angel investing? Where do you see durable product value in AI?</strong></p><p>I think quantum computing and AI together will open up interesting opportunities.</p><p>Also, with so many agents everywhere, agentic security is an area of interest. I like talking to companies tackling governance and safety.</p><p>At the product level, I have a simple framework &#8211; is it solving real problems, and can it work at scale? There&#8217;s so much hype in AI. I need to see that the product works in production.</p><p>Also, speed and execution were always important in startups. But the time window is getting dramatically compressed.</p><p>We were at an HR conference recently. One of the other exhibitors, a legacy company, was describing their product. One of our solution architects was listening, and built a working version of their entire product. In 15 minutes, on top of Ema.</p><p>Trust, security, enterprise viability are going to matter a lot in a world where you can vibe code anything in 15 minutes.</p><p>Jeff Bezos famously said, &#8220;Your margin is my opportunity.&#8221; Well, now your code and people are my opportunity. The more code or bureaucracy you have, the slower you are going to be.</p><p><strong>I think many of the fundamentals still hold. Network effects, unique data &#8211; those still matter. Even if AI means you&#8217;re building those into the product in new and different ways.</strong></p><p>For us, we get more defensible the more we learn about our customers. 
Once we have a good context graph and procedural knowledge, the next deployments become easier and easier.</p><p>That&#8217;s also part of why we went horizontal. If our compounding asset is knowledge of the customer, then it gets easy to build different agentic systems on top of that, across different functions.</p><p><strong>Great stuff, a lot to think about. Thanks for doing this, Surojit.</strong></p><p>Thanks for having me.</p><p></p><div><hr></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>In computer science, a <em>deterministic</em> system always produces the same output given the same input. A <em>non-deterministic</em> system can produce different outputs for the same input, depending on how it reasons through the problem.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p><em>Model convergence</em> is the hypothesis that frontier LLMs &#8211; OpenAI's GPT series, Anthropic's Claude, Google's Gemini &#8211; are trending toward similar capability levels over time. The underlying mechanism is explained by scaling laws. Performance is primarily a function of compute, data, and architecture. Because all major labs use transformer-based architectures, train on largely overlapping internet-scale data, and operate at comparable compute budgets, they tend to land on similar performance curves.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p><em>HLE (Humanity&#8217;s Last Exam)</em> is a benchmark of 2,500 expert-level academic questions across 100+ subjects. Designed to test the limits of frontier models on problems requiring genuine reasoning, it remains one of the few major benchmarks where top models still score below 40%.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p><em>OCR (optical character recognition)</em> is a specialized AI model designed to extract text from images, scanned documents, and PDFs. Unlike general-purpose LLMs, OCR models are optimized for parsing visual layouts &#8212; tables, handwriting, low-quality scans &#8212; where a frontier language model would be overkill or fail at scale.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>The <em>transformer</em> is the architecture that underpins virtually all modern large language models. 
Its key innovation, the self-attention mechanism, allowed models to process entire sequences of text in parallel rather than word by word, dramatically improving both training efficiency and output quality.</p></div></div>]]></content:encoded></item><item><title><![CDATA[The Dark Matter of Work]]></title><description><![CDATA[This post maps out a new class of AI products that observe how people and agents work, and then encode what they learn as a new data layer.]]></description><link>https://www.thetimes.blog/p/the-dark-matter-of-work</link><guid isPermaLink="false">https://www.thetimes.blog/p/the-dark-matter-of-work</guid><dc:creator><![CDATA[Evan O'Donnell]]></dc:creator><pubDate>Wed, 18 Mar 2026 12:30:50 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!hNP0!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faa71aff2-162d-4ac7-be93-f5116da51f96_1344x1023.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This post maps out a new class of AI products that observe how people and agents work, and then encode what they learn as a new data layer. These products are built to get stronger and more defensible as models improve.</p><div><hr></div><h3><strong>01 | </strong>The dark matter of work</h3><p>Right now is a difficult time to build and invest in new technology. Model capabilities are improving so fast that many AI-native products, even those with real utility and traction, are quickly becoming obsolete.</p><p>In my own diligence, I&#8217;ve had to think very, very hard about this question&#8230;</p><div class="pullquote"><p><em><strong>If models get 50x better, why does this product still have a right to exist?</strong></em></p></div><p>To reason through that, I start with what models are fundamentally good at.</p><p>Today&#8217;s models <a href="https://timespan.us22.list-manage.com/track/click?u=1ef09c473dad0e37c3fb15e20&amp;id=b4d73baba3&amp;e=abab8c0019">transform</a> unstructured inputs into outputs. They do three things extraordinarily well: (i) <em>understand</em> and <em>generate</em> language, code, and images; (ii) <em>reason</em> through multi-step problems; and (iii) <em>execute</em> commands within software systems. They&#8217;re getting better at these tasks every day.</p><p>If your product&#8217;s value prop hinges <em>exclusively</em> on enhancing the AI along these dimensions &#8211; smarter reasoning, better retrieval and context management, deeper domain expertise &#8211; you&#8217;re one model release from obsolescence.</p><p>Durable product value has to come from somewhere else. Not from patching short-term gaps in intelligence capabilities, but by creating entirely <em>new</em> inputs for the AI to work with. </p><p><strong>Put differently, the best product strategy isn&#8217;t about making AI smarter &#8211; it&#8217;s about expanding what the AI can see. </strong>After all, any model, no matter how powerful, can only reason over what it observes.</p><p>This turns out to be an old concept.</p><p>In 1991, Ikujiro Nonaka published one of the most cited management papers in history &#8211; <a href="https://timespan.us22.list-manage.com/track/click?u=1ef09c473dad0e37c3fb15e20&amp;id=a7817d37e6&amp;e=abab8c0019">The Knowledge-Creating Company</a>. His thesis is that organizations run on two kinds of knowledge.</p><p><em>Explicit knowledge</em> is easily codified &#8211; a process, a policy document, a decision tree. <em>Tacit knowledge</em> is craft. 
It lives in human intuition, practice, and judgment. It&#8217;s the customer success manager who knows which clients need a preemptive call. Or the engineer who senses which parts of the codebase are fragile before the logs pick it up in production.</p><p><strong>Tacit knowledge is the &#8220;dark matter&#8221; of work</strong> &#8211; the invisible reasoning, intent, and institutional memory that connects raw information to action. It doesn&#8217;t live in any system of record. It expresses itself ephemerally, through human behavior, in the course of day-to-day work.</p><p>That is, until now.</p><p>A new class of AI products is starting to capture this invisible layer &#8211; not by asking people to document what they know, but by directly observing how they work&#8230;</p><p>&#8230;<em>vision models</em> that perceive user activity inside applications across thousands of daily sessions&#8230;</p><p>&#8230;<em>language models</em> that extract reasoning from employee conversations scattered across Slack and email&#8230;</p><p>&#8230;<em>code provenance tools</em> that trace every AI-generated function back to the prompt, the agent&#8217;s chain of thought, and developer corrections.</p><p>These products aren&#8217;t bolting AI over a preexisting database. They&#8217;re producing a fundamentally new kind of data asset &#8211; a <em>dynamic, contextual layer</em> that never existed before.</p><div><hr></div><h3><strong>02 | </strong>Making the invisible visible</h3><p>What does this look like in practice? The strongest products in this category share four properties.</p><p><strong>1/ They generate a fundamentally new kind of data asset.</strong> One that has never appeared in any system of record.</p><p><strong>2/ They don&#8217;t just capture </strong><em><strong>what</strong></em><strong> happened &#8211; they encode </strong><em><strong>why</strong></em><strong>.</strong> Raw behavioral data gets translated into a representation of meaning, one machines can read and act on.</p><p><strong>3/ Their observation is continuous and self-reinforcing. </strong>The system quietly observes how people work, distills patterns, and feeds that back into the product to improve accuracy and automation. This in turn drives more usage, and therefore generates more data to learn from.</p><p><strong>4/ They create a data primitive that others build on.</strong> Structured context that third-party agents and products have an incentive to integrate with and consume.</p><p>A couple early products are already building in this direction:</p><div><hr></div><h6>EXAMPLE #1: DEV TOOLING // <a href="https://entire.io/">ENTIRE.IO</a></h6><p><strong>Entire</strong> is building a new storage and provenance layer for code.</p><p>The founding team came from GitHub and saw firsthand the limitations of git<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a><strong><sup> </sup></strong>in the AI era. Specifically, git stores code as text files and tracks version history, but has no mechanism to capture <em>what the code means</em> or <em>why it was written</em>.</p><p>That was fine when humans wrote every line and held the design in their heads. 
It breaks when code production is outsourced to AI agents.</p><p>Entire solves this by making this invisible layer &#8211; the history and meaning of the code &#8211; explicit.</p><p>Their first product, <a href="https://timespan.us22.list-manage.com/track/click?u=1ef09c473dad0e37c3fb15e20&amp;id=56125f5539&amp;e=abab8c0019">Checkpoints</a>, automatically captures the full agent coding session from the IDE<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> &#8212; all prompts, agent output and reasoning, and developer corrections. When the developer commits<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> to the repository, that context is permanently linked to the code as a structured, queryable asset, available to every future agent and engineer who works on the project.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!hNP0!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faa71aff2-162d-4ac7-be93-f5116da51f96_1344x1023.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!hNP0!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faa71aff2-162d-4ac7-be93-f5116da51f96_1344x1023.png 424w, https://substackcdn.com/image/fetch/$s_!hNP0!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faa71aff2-162d-4ac7-be93-f5116da51f96_1344x1023.png 848w, https://substackcdn.com/image/fetch/$s_!hNP0!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faa71aff2-162d-4ac7-be93-f5116da51f96_1344x1023.png 1272w, https://substackcdn.com/image/fetch/$s_!hNP0!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faa71aff2-162d-4ac7-be93-f5116da51f96_1344x1023.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!hNP0!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faa71aff2-162d-4ac7-be93-f5116da51f96_1344x1023.png" width="1344" height="1023" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/aa71aff2-162d-4ac7-be93-f5116da51f96_1344x1023.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1023,&quot;width&quot;:1344,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:137347,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.thetimes.blog/i/191322786?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faa71aff2-162d-4ac7-be93-f5116da51f96_1344x1023.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!hNP0!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faa71aff2-162d-4ac7-be93-f5116da51f96_1344x1023.png 424w, https://substackcdn.com/image/fetch/$s_!hNP0!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faa71aff2-162d-4ac7-be93-f5116da51f96_1344x1023.png 848w, https://substackcdn.com/image/fetch/$s_!hNP0!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faa71aff2-162d-4ac7-be93-f5116da51f96_1344x1023.png 1272w, https://substackcdn.com/image/fetch/$s_!hNP0!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faa71aff2-162d-4ac7-be93-f5116da51f96_1344x1023.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h5>Sample session summary for a single commit, showing the developer&#8217;s original prompt, the agent&#8217;s tool calls and reasoning, and the resulting code changes. <a href="https://thenewstack.io/thomas-dohmke-interview-entire/">[source]</a></h5><p></p><p>Here, every coding session contributes to the system&#8217;s understanding of the code, its interdependencies, and how it came to be. Over time, this turns the code repo from a static storage archive into a <em>living record</em>, one that gets more robust with every commit.</p><p>And better models only amplify that advantage, because as smarter agents come to market, they can extract more and new types of value from the full context and history.</p><div><hr></div><h6>EXAMPLE #2: HORIZONTAL SOFTWARE // <a href="https://www.worktrace.ai/">WORKTRACE.AI</a></h6><p><strong><a href="http://worktrace.ai">Worktrace</a></strong> is building a next-generation RPA<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> product. 
The founder came from OpenAI and saw that the biggest bottleneck to enterprise AI adoption isn&#8217;t the technology &#8211; it&#8217;s knowing <em>what</em> workflows to automate, and <em>how</em> to do so.</p><p>Today, figuring out what to automate is manual and slow &#8211; teams either map processes by hand or rely on software tools in UiPath or Zapier that capture surface-level behavior like clicks and data flows.</p><p>Worktrace replaces that.</p><p>The product integrates with existing tools like Slack and Linear, quietly observing how employees work. But it goes beyond surface-level process mapping &#8211; it parses <em>intent</em>, capturing <em>why</em> employees take certain actions and <em>how</em> they complete the same task in different ways. Why does a coordinator skip certain steps under time pressure? Why do two people follow different paths to the same outcome?</p><p>The output is a prioritized map of automation opportunities, ranked by potential impact, with a spec that outlines the right AI model and agent design for each task.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!jHkO!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a21b853-cafa-4547-9587-18a784851051_1390x965.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!jHkO!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a21b853-cafa-4547-9587-18a784851051_1390x965.png 424w, https://substackcdn.com/image/fetch/$s_!jHkO!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a21b853-cafa-4547-9587-18a784851051_1390x965.png 848w, https://substackcdn.com/image/fetch/$s_!jHkO!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a21b853-cafa-4547-9587-18a784851051_1390x965.png 1272w, https://substackcdn.com/image/fetch/$s_!jHkO!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a21b853-cafa-4547-9587-18a784851051_1390x965.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!jHkO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a21b853-cafa-4547-9587-18a784851051_1390x965.png" width="1390" height="965" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8a21b853-cafa-4547-9587-18a784851051_1390x965.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:965,&quot;width&quot;:1390,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:142154,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.thetimes.blog/i/191322786?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a21b853-cafa-4547-9587-18a784851051_1390x965.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!jHkO!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a21b853-cafa-4547-9587-18a784851051_1390x965.png 424w, https://substackcdn.com/image/fetch/$s_!jHkO!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a21b853-cafa-4547-9587-18a784851051_1390x965.png 848w, https://substackcdn.com/image/fetch/$s_!jHkO!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a21b853-cafa-4547-9587-18a784851051_1390x965.png 1272w, https://substackcdn.com/image/fetch/$s_!jHkO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a21b853-cafa-4547-9587-18a784851051_1390x965.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h5>Sample discovery dashboard, ranked by automation impact (hours saved), with the applications involved and number of steps for each. <a href="https://www.upstartsmedia.com/p/worktrace-ai-automation-openai">[source]</a></h5><p></p><p>Rather than competing on &#8220;better automation,&#8221; Worktrace owns something far harder to replicate &#8211; a nuanced and dynamic understanding of how employees <em>actually</em> work and where AI can have the most impact.</p><p>As models improve, two things happen: (1) the live observation captures richer, more complex workflow patterns, and (2) Worktrace can revisit its accumulated history to surface automations that earlier models couldn&#8217;t detect.</p><p>In other words, the product only gets more valuable as AI accelerates.</p><div><hr></div><h3><strong>03 | </strong>The anatomy of a durable product</h3><p>There are a number of other products &#8211; across productivity, vertical SaaS, and developer tools &#8211; developing a similar architecture. Each looks different on the surface. 
But underneath, they share the same product template.</p><p></p><blockquote><p><strong>Step 1: A perception mechanism</strong></p></blockquote><p>These products need a way to observe behavior that was previously invisible to software, by watching activity inside and across applications.</p><p>This capability is distinct from a <a href="https://timespan.us22.list-manage.com/track/click?u=1ef09c473dad0e37c3fb15e20&amp;id=aa2b904acd&amp;e=abab8c0019">knowledge or context graph</a>, which organizes relationships between data artifacts that existing systems already produce. For example, a <em>knowledge graph</em> might map the relationship between a support ticket, the customer's account history, and the agent who escalated it. But all of that data already lives in a database somewhere. An <em>observation model,</em> on the other hand, captures what doesn't &#8211; the Slack side-channel that explains why the ticket was escalated in the first place.</p><p>A few tailwinds are making this observation capability viable and more efficient.</p><p>Techniques like distillation<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a> and quantization<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a> now allow small, purpose-built models to <a href="https://timespan.us22.list-manage.com/track/click?u=1ef09c473dad0e37c3fb15e20&amp;id=57ccdfc47b&amp;e=abab8c0019">run locally</a> on phones and laptops, enabling simple tasks in the observation flow, like data capture and entity extraction, to happen at the edge. This can deliver a cheaper and more secure architecture, since raw data doesn&#8217;t need to leave the device.</p><p>Additionally, multimodal models can now process vision, audio, and text simultaneously, rather than requiring separate models for each. As multimodal capabilities improve, this dramatically expands the range of behavior that software can perceive and analyze.</p><p></p><blockquote><p><strong>Step 2: Converting raw inputs into meaning</strong></p></blockquote><p>Most of the product value &#8211; the &#8220;secret sauce&#8221; &#8211; lives in how these systems convert perceptions (from Step 1) into structured, semantic data.</p><p>The exact tactics and implementation vary, depending on the product and category. But there are a few things to look for when evaluating whether a product is doing this well:</p><ul><li><p>Is the output schematized into defined entities, fields, and relationships? Is it something a machine can read?</p></li><li><p>Is the output transferable? Can it be plugged directly into a third-party agent builder, API, or automation pipeline with minimal transformation?</p></li><li><p>Does the output encode intent and decision logic (&#8220;why&#8221; something happened), in addition to behavioral telemetry (&#8220;what&#8221; happened)?</p></li><li><p>Can the system successfully handle edge cases and variability, recognizing when different actions are targeting the same end goal?</p></li></ul><p></p><blockquote><p><strong>Step 3: Setting up the feedback loop</strong></p></blockquote><p>Output from Step 2 flows back into the product.
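</p>
<p>To make the hand-off between steps concrete: below is a minimal, illustrative Python sketch of what a structured Step 2 record and a simple Step 3 write-back could look like. The field names, helper functions, and JSONL store are hypothetical choices for illustration, not a description of any specific product mentioned in this piece.</p>
<pre><code class="language-python">from dataclasses import dataclass, asdict
import json

@dataclass
class WorkflowEvent:
    # Step 2 output: a schematized record a machine can read
    actor: str    # who performed the action
    action: str   # what happened (behavioral telemetry)
    intent: str   # why it happened (decision logic)
    app: str      # where the behavior was observed
    outcome: str  # the end goal the action was targeting

def to_event(raw_observation):
    """Convert a raw Step 1 observation (a plain dict) into a structured event."""
    return WorkflowEvent(
        actor=raw_observation["user"],
        action=raw_observation["action"],
        intent=raw_observation.get("inferred_intent", "unknown"),
        app=raw_observation["source_app"],
        outcome=raw_observation.get("goal", "unknown"),
    )

def write_back(event, path="context_store.jsonl"):
    """Step 3: append the structured event so future sessions can retrieve it."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(event)) + "\n")

# Example: one observed action flowing through the loop
raw = {"user": "ops-coordinator", "action": "skipped_review_step",
       "inferred_intent": "ticket flagged as urgent", "source_app": "Slack",
       "goal": "close ticket"}
write_back(to_event(raw))
</code></pre>
<p>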
This is typically done through either: (i) <em>context enrichment</em> (prior data and observations are stored and organized in a way that helps agents easily find and access in future sessions), or (ii) <em>model adjustment and fine-tuning</em> (interaction data and output is used to retrain the models doing observation in Step 1 and conversion in Step 2).</p><div><hr></div><h3><strong>04 | </strong>Externalization, industrialized</h3><p>Nonaka understood that a company&#8217;s real competitive edge comes from <em>externalization</em> &#8211; the ability to convert human intuition and craft into something others can use. In 1991, this was done through language and human dialogue.</p><div class="pullquote"><p><em>&#8220;The three terms capture the process by which organizations convert tacit knowledge into explicit knowledge: first, by linking contradictory things and ideas through <strong>metaphor</strong>; then, by resolving these contradictions through <strong>analogy</strong>; and, finally, by crystallizing the created concepts and embodying them in a <strong>model</strong>, which makes the knowledge available to the rest of the company.&#8221;</em></p><p>- Ikujiro Nonaka, The Knowledge-Creating Company (1991)</p></div><p>For Nonaka, externalization was always gated by human bandwidth &#8211; the slow work of conversation, management, and mentorship.</p><p>But AI removes that constraint. The products discussed in this piece take the raw, unstructured mess of how people work and convert it into semantic meaning.</p><p>It's externalization, industrialized.</p><p>The products that win today won&#8217;t succeed with "better AI" or "better agents." They will develop a new sensory layer for work &#8211; one that observes what no system has ever recorded, translates it into meaning machines can read, and allows that knowledge corpus to build organically over time.</p><p>The companies that get this architecture right won&#8217;t just survive the next model release. <strong>They&#8217;ll get stronger from it.</strong></p><p>They'll bring the dark matter of work into the light.</p><div><hr></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p><em>Git</em> is an open-source version control system designed in 2005 (the company Github is a collaboration layer built on top of this framework). Git stores code as blobs (raw text of code with no metadata), organized into trees (folder hierarchies) and commits (snapshots of the entire repo), all linked in a directed acyclic graph (DAG) that records how the codebase evolves. Every commit points to its parent, enabling Git to trace parallel branches to their common ancestor and reconcile textual differences line-by-line.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>An <em>IDE (integrated development environment)</em> is the application where software developers write, edit, and test their code. 
This includes tools like VS Code or Cursor.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>A <em>commit</em> is a snapshot of changes saved to a codebase &#8212; think of it as a developer pressing &#8220;save&#8221; on a batch of work, with a short note describing what changed.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p><em>Robotic process automation (RPA)</em> is a category of software that automates repetitive, rule-based tasks like data entry, form filling, and moving information between systems.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p><em>Distillation</em> is a technique where a smaller model is trained to replicate the behavior of a larger, more capable model, compressing its knowledge into a form that's cheaper and faster to run.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p><em>Quantization</em> is a technique that compresses a model&#8217;s numerical precision &#8211; e.g., from 32-bit to 4-bit values &#8211; dramatically reducing its memory footprint and compute requirements while preserving most of its performance. It&#8217;s one of the key techniques enabling large models to run on consumer hardware.</p></div></div>]]></content:encoded></item><item><title><![CDATA[Agents Need to Fail Better]]></title><description><![CDATA[A Conversation with Yutori co-founder Dhruv Batra]]></description><link>https://www.thetimes.blog/p/agents-need-to-fail-better</link><guid isPermaLink="false">https://www.thetimes.blog/p/agents-need-to-fail-better</guid><dc:creator><![CDATA[Evan O'Donnell]]></dc:creator><pubDate>Fri, 06 Feb 2026 15:21:18 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!b9e6!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13d77ef6-b705-4d18-aa11-ff60c063f892_692x370.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>January was a brutal month for software.</p><p>Overall, public SaaS is down <a href="https://finance.yahoo.com/news/traders-dump-software-stocks-ai-115502147.html">15%</a> year-to-date. But the pain isn&#8217;t evenly distributed. 
Companies built around UIs, forms, and human workflows are feeling it the most.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!qYCq!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd706ed58-b7b6-4e0d-bb87-400e2039dab7_2115x1440.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!qYCq!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd706ed58-b7b6-4e0d-bb87-400e2039dab7_2115x1440.png 424w, https://substackcdn.com/image/fetch/$s_!qYCq!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd706ed58-b7b6-4e0d-bb87-400e2039dab7_2115x1440.png 848w, https://substackcdn.com/image/fetch/$s_!qYCq!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd706ed58-b7b6-4e0d-bb87-400e2039dab7_2115x1440.png 1272w, https://substackcdn.com/image/fetch/$s_!qYCq!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd706ed58-b7b6-4e0d-bb87-400e2039dab7_2115x1440.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!qYCq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd706ed58-b7b6-4e0d-bb87-400e2039dab7_2115x1440.png" width="1456" height="991" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d706ed58-b7b6-4e0d-bb87-400e2039dab7_2115x1440.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:991,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:139228,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.thetimes.blog/i/187023780?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd706ed58-b7b6-4e0d-bb87-400e2039dab7_2115x1440.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!qYCq!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd706ed58-b7b6-4e0d-bb87-400e2039dab7_2115x1440.png 424w, https://substackcdn.com/image/fetch/$s_!qYCq!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd706ed58-b7b6-4e0d-bb87-400e2039dab7_2115x1440.png 848w, https://substackcdn.com/image/fetch/$s_!qYCq!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd706ed58-b7b6-4e0d-bb87-400e2039dab7_2115x1440.png 1272w, https://substackcdn.com/image/fetch/$s_!qYCq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd706ed58-b7b6-4e0d-bb87-400e2039dab7_2115x1440.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex 
pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>The markets are pricing in a &#8220;<a href="https://www.fabricatedknowledge.com/p/the-death-of-software-20-a-better">SaaS-pocalypse</a>.&#8221;</p><p>Agents, the story goes, will execute workflows on demand, spinning up interfaces and interacting directly with data. In that world, the only enduring value in software are <em>(i) agents</em> &#8211; and <em>(ii) the</em> <em>data and infra layers</em> that feed them. Pre-built software applications will become <a href="https://finviz.com/news/291247/ai-agents-not-vibe-coding-will-define-future-of-software-says-openai-chair">obsolete</a>.</p><p><strong>That story assumes agents will work reliably. Right now, most don&#8217;t.</strong> They&#8217;re still brittle. They lose context. They don&#8217;t recover well from mistakes.</p><p>There&#8217;s a branch of computer science called embodied AI<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> that offers a different way to think about how agents should be built, and the infrastructure they need to operate effectively.</p><p>Embodied AI is built on the insight that intelligence can&#8217;t exist in a vacuum. Brains don&#8217;t connect directly to their surroundings. They need a <em>body</em> to mediate between raw cognition and the external world. 
Intelligence, in this view, emerges from continuous-loop interaction between an agent, its physical form, and its environment.</p><p><strong><a href="https://dhruvbatra.com/index.html">Dhruv Batra</a> is a leader in embodied AI, with a deep understanding of how machines can be built to navigate complex, dynamic environments.</strong></p><p>Dhruv was previously a Research Director at Meta&#8217;s FAIR (Fundamental AI Research), where he spearheaded the development of <a href="https://aihabitat.org/">Habitat</a> (the fastest 3D simulator for training virtual robots) and the team that built the AI assistant integrated into <a href="https://www.meta.com/ai-glasses/">Meta Ray-Ban SmartGlasses</a>.</p><p>Before that, he was an Associate Professor at Georgia Tech, working at the intersection of computer vision, machine learning, natural language processing, and robotics.</p><p>Today, Dhruv is co-founder and Chief Scientist at <a href="https://yutori.com/scouts">Yutori</a>, a new startup rethinking web agents from the ground up.</p><p>Alongside a team of researchers from Meta, Tesla, Palantir, and Google, his bet is that the web should be treated less like a database to parse and reason over, and more like a physical environment &#8211; chaotic, visual, dynamic.</p><p>I sat down with Dhruv to explore the future of agentic software through the lens of embodied AI &#8211; how these agents need to be built and the gaps that remain if these products are to fulfill their promise.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!b9e6!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13d77ef6-b705-4d18-aa11-ff60c063f892_692x370.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!b9e6!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13d77ef6-b705-4d18-aa11-ff60c063f892_692x370.png 424w, https://substackcdn.com/image/fetch/$s_!b9e6!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13d77ef6-b705-4d18-aa11-ff60c063f892_692x370.png 848w, https://substackcdn.com/image/fetch/$s_!b9e6!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13d77ef6-b705-4d18-aa11-ff60c063f892_692x370.png 1272w, https://substackcdn.com/image/fetch/$s_!b9e6!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13d77ef6-b705-4d18-aa11-ff60c063f892_692x370.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!b9e6!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13d77ef6-b705-4d18-aa11-ff60c063f892_692x370.png" width="490" height="261.9942196531792" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/13d77ef6-b705-4d18-aa11-ff60c063f892_692x370.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:370,&quot;width&quot;:692,&quot;resizeWidth&quot;:490,&quot;bytes&quot;:170457,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.thetimes.blog/i/187023780?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13d77ef6-b705-4d18-aa11-ff60c063f892_692x370.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!b9e6!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13d77ef6-b705-4d18-aa11-ff60c063f892_692x370.png 424w, https://substackcdn.com/image/fetch/$s_!b9e6!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13d77ef6-b705-4d18-aa11-ff60c063f892_692x370.png 848w, https://substackcdn.com/image/fetch/$s_!b9e6!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13d77ef6-b705-4d18-aa11-ff60c063f892_692x370.png 1272w, https://substackcdn.com/image/fetch/$s_!b9e6!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13d77ef6-b705-4d18-aa11-ff60c063f892_692x370.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><div><hr></div><h3><strong>01 |  </strong>Bits move faster than atoms</h3><p><strong>EO: You spent a good chunk of your career in embodied AI, teaching machines to perceive and act in real-world environments. Now, you&#8217;re focused on web agents that live, at least today, in the digital realm. Tell me about that journey.</strong></p><p>DB: I actually started my career in computer vision and machine learning. 
A decade ago, I co-authored a <a href="https://arxiv.org/pdf/1505.00468">paper</a> that outlined early methods for chatbots to answer questions about images, which won the Everingham Prize last year for stimulating a new strand of vision and language research.</p><p>But five or six years ago, I grew dissatisfied with chatbots. I wanted them to <em>do things</em> in the world. That took me into reinforcement learning, then robotics.</p><p>I&#8217;m not a roboticist by training. So I earned my credibility, doing things like supervising PhD students in the field and winning a best paper award at a robotics conference.</p><p><em>I don&#8217;t see physical robots arriving before digital agents.</em></p><p>Bits move faster than atoms. They&#8217;re easier to clone and iterate on. So I took all that learning and started Yutori.</p><p><strong>Why did you land on web agents specifically?</strong></p><p>Our thesis is that the web is going to fundamentally change. Over the next decade, probably sooner, no human should have to operate a browser. Why click buttons or fill out forms when machines can handle it?</p><p>In 2025, we saw real progress with coding agents, but no digital assistants that could automate entire browser workflows. The models weren&#8217;t there yet, but we knew it was coming. We saw a window of opportunity.</p><p><strong>So where do current web agents fall short? In my experience, Claude&#8217;s computer use and OpenAI Operator are getting pretty good.</strong></p><p>They are improving. But our browser-use model, <a href="https://yutori.com/blog/introducing-n1">n1</a>, outperforms Opus 4.5, GPT 5.2, and Gemini 2.5.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> (Gemini 3 computer use benchmarks aren&#8217;t out yet, so that comparison isn&#8217;t possible.)</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!fMpV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2b8a11f9-0e71-45f7-b0e4-18dd08908397_1920x960.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!fMpV!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2b8a11f9-0e71-45f7-b0e4-18dd08908397_1920x960.png 424w, https://substackcdn.com/image/fetch/$s_!fMpV!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2b8a11f9-0e71-45f7-b0e4-18dd08908397_1920x960.png 848w, https://substackcdn.com/image/fetch/$s_!fMpV!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2b8a11f9-0e71-45f7-b0e4-18dd08908397_1920x960.png 1272w, https://substackcdn.com/image/fetch/$s_!fMpV!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2b8a11f9-0e71-45f7-b0e4-18dd08908397_1920x960.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!fMpV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2b8a11f9-0e71-45f7-b0e4-18dd08908397_1920x960.png" width="1456" height="728" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2b8a11f9-0e71-45f7-b0e4-18dd08908397_1920x960.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:728,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:32658,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.thetimes.blog/i/187023780?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2b8a11f9-0e71-45f7-b0e4-18dd08908397_1920x960.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!fMpV!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2b8a11f9-0e71-45f7-b0e4-18dd08908397_1920x960.png 424w, https://substackcdn.com/image/fetch/$s_!fMpV!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2b8a11f9-0e71-45f7-b0e4-18dd08908397_1920x960.png 848w, https://substackcdn.com/image/fetch/$s_!fMpV!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2b8a11f9-0e71-45f7-b0e4-18dd08908397_1920x960.png 1272w, https://substackcdn.com/image/fetch/$s_!fMpV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2b8a11f9-0e71-45f7-b0e4-18dd08908397_1920x960.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Why is our model outperforming? <em>Better perception-action feedback loops<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a></em> &#8211; something I learned from robotics.</p><p>Historically, most AI models were pre-trained on static data &#8211; text and images from the web. 
That&#8217;s a good foundation, but it doesn&#8217;t teach the model how to <em>act</em> or <em>recover</em> when things go wrong.</p><p>This is a known problem in robotics. Mistakes compound. If your robot slips on ice, it&#8217;s now in a body configuration it hasn&#8217;t been trained on, and things can spiral. And when there&#8217;s hardware involved, compounding mistakes are costly. So we train robotic systems to course correct when an action doesn&#8217;t produce the expected result.</p><p>When we started Yutori, that kind of perception-action feedback data was underrepresented in how most web agents were trained. Without that, agents don&#8217;t learn to recover.</p><p><strong>So how do you train agents to recover from mistakes, to fail better?</strong></p><p>Through <em>post-training</em> &#8211; supervised fine-tuning (SFT)<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> and reinforcement learning (RL)<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a> on both synthetic and real websites.</p><p>In robotics, you have to choose your training approach depending on the task. For locomotion, RL works well because the physics relevant to the tasks transfer over. An angle is an angle &#8211; a robot&#8217;s leg should bend the same way in simulation as in the real world.</p><p>But for vision-based tasks like picking up objects, camera input is tied to the environment. Every room has different lighting, every surface reflects differently. You can&#8217;t simulate all of that, so you rely on human demonstrations that the robot learns to copy.</p><p>Web agents don&#8217;t have those constraints. The sim-to-real gap<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a> is small in the digital world, and training on live websites is safe and low risk.</p><p>That means we can do SFT based on demonstrations, showing the agent what to do when a button doesn&#8217;t load or a page renders incorrectly. <em>And</em> we can do RL on real websites, rewarding successful actions and recovery.</p><p>That combination teaches our agents resilience and persistence.</p><p><strong>Is there something in your approach &#8211; like your multi-agent orchestration &#8211; that big companies like Anthropic won&#8217;t replicate, because they think differently about how agents should work?</strong></p><p>First off, the foundation labs absolutely <em>can</em> deploy these techniques. In AI, there are no secrets that can be kept forever. The question is &#8211;<em> will they?</em> They&#8217;re fighting so many battles at once. The advantage we have as a startup is focus.</p><p>That said, our agents are very specialized for navigating the web.</p><p>Our first product, <a href="https://yutori.com/blog/building-the-proactive-multi-agent-architecture-powering-scouts">Scouts</a>, is essentially Google Alerts for the AI era. You tell it &#8220;let me know when X happens&#8221; &#8211; a price drop, a reservation opening up, a company announcing a raise &#8211; and it monitors for that continuously.</p><p>A key design choice we made is that users shouldn&#8217;t have to specify sources, to tell the AI where to look. 
The system needs to search wide and deep on its own, which means training the agents to integrate with APIs and third-party tools across the web.</p><p>But here&#8217;s the problem &#8211; if you give a single agent more than 100 APIs to call from, it starts thrashing.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a> It loses track of what it&#8217;s doing.</p><p>So we use a hierarchical structure. There&#8217;s an orchestrator agent, with access to the full user prompt. That orchestrator directs sub-agents, each specialized for different tool categories. For example, we have a sub-agent for social media, one for news sources, and so on. Each manages a smaller context.</p><p>Claude Code and Manus have announced something similar. Teams are arriving at the same discovery &#8211; <em>without careful context management, performance degrades.</em></p><p><strong>You mentioned the need to train agents to &#8220;integrate with APIs across the web.&#8221; But most sites don&#8217;t have APIs. How do you program your agents to handle this long tail of the Internet?</strong></p><p>Right. Most websites do not have APIs agents can read from.</p><p>For those, we found it&#8217;s <a href="https://yutori.com/blog/the-bitter-lesson-for-web-agents">better</a> for our agents to interact with pixels rather than parse the underlying HTML or JavaScript.</p><p>We first approached this as a coding problem. Websites are just code, and so we thought the logical approach was for agents to parse the back-end. But we quickly learned that websites are too heterogeneous. They&#8217;re programmed in wildly different ways, with hidden hacks on the back-end.</p><p>It turns out the best source of truth is what the human actually sees, the pixels.</p><p><strong>That&#8217;s counterintuitive, since the back-end contains </strong><em><strong>more</strong></em><strong> information than what&#8217;s displayed on the screen.</strong></p><p>There&#8217;s an analog here to Waymo versus Tesla.</p><p>Waymo uses cameras, radars, and LiDAR<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-8" href="#footnote-8" target="_self">8</a> to create a rich, 3D map of the environment. Tesla uses cameras only. Tesla&#8217;s argument is that vision is sufficient and scales better, since roads were already designed for humans to navigate visually.</p><p>We&#8217;re taking the Tesla approach. For them, it is largely about being more cost-effective. For us, it&#8217;s about managing the litany of edge cases that appear across the web.</p><p>Parsing the back-end <em>seems</em> like more information, but it creates more problems than it solves. What&#8217;s rendered to the screen follows visual conventions that humans expect, and those are far more stable.</p><p>In this case, the <em>right</em> representation matters more than the one with the most data.</p><div><hr></div><h3><strong>02 |  </strong>Context is everything</h3><p><strong>Can you touch more on persistence? Your agents are designed to run continuously, for months. I imagine that could get unwieldy if not designed well.</strong></p><p>Our product Scouts is unique in that it is &#8220;always on,&#8221; built for proactive monitoring. That comes with trade-offs. If agents are constantly going out into the world, your costs go up.
And you can ruin the user experience with too many or repeat notifications.</p><p>The agent needs to know whether it&#8217;s already told the user about something. That means remembering everything you told the user in the past.</p><p>Five or six years ago, you&#8217;d solve that problem with embeddings.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-9" href="#footnote-9" target="_self">9</a> But we saw how fast model capabilities were evolving, and designed around retrieval instead. We write everything we&#8217;ve told the user into a file system, and have a custom agent that runs searches against those files.</p><p>Models are good enough to now grep and diff<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-10" href="#footnote-10" target="_self">10</a> against previous reports. It&#8217;s easier to hand context retrieval over to an LLM rather than trying to architect a vector database.</p><p><strong>How do you manage that at scale? If a Scout has been running for a year, that&#8217;s a lot of context.</strong></p><p>This is genuinely new territory. I&#8217;ve had agents running for ten, eleven months. Such long-running, always-on agents have never been built before.</p><p>We built our memory architecture in-house. No open-source frameworks.</p><p>Model capabilities are improving fast enough that these frameworks end up crippling the product. A key part of our product value boils down to very careful context management.</p><p><strong>How do you think about the right way to design a context graph,</strong><a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-11" href="#footnote-11" target="_self">11</a><strong> one that scales and that both meets the capabilities of where agentic systems are today, but also will evolve as models improve?</strong></p><p>This is something we think about a lot at Yutori.</p><p>Today, context windows<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-12" href="#footnote-12" target="_self">12</a> are the main constraint dictating how AI systems get built. For example, a common failure with MCP<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-13" href="#footnote-13" target="_self">13</a> tool calls is if the response is too verbose, it floods the context window and the agent can&#8217;t perform its task. Some harnesses<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-14" href="#footnote-14" target="_self">14</a> now write longer responses to a file for later reference, instead of consuming them directly.</p><p>But that problem compounds for agents running 24/7 for months.</p><p>File systems are the best we have right now &#8211; write context to files, let the agent search and summarize as it goes.</p><p>The naive view is that context windows will just keep scaling. But biological systems suggest another path.</p><p><strong>What do you mean?</strong></p><p>I&#8217;ll caveat that biological systems are not perfect analogs for agentic systems.</p><p>But what I mean is &#8211; mammals don&#8217;t have infinite context. When they sleep, they&#8217;re compressing memory. 
Distilling the day&#8217;s experience, converting System 2 into System 1 thinking.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-15" href="#footnote-15" target="_self">15</a> There&#8217;s a reason beginners in sports are told to get sufficient sleep. They&#8217;re converting new techniques into muscle memory.</p><p>I don&#8217;t think simple scaling of context windows will get us to years of runtime context. We&#8217;ll have to conduct new research around distillation and continual learning. It probably looks like new mechanisms for automatically updating model weights, converting accumulated context and experience into more efficient representations.</p><p>Unfortunately, I don&#8217;t yet know how to build systems that anticipate that solution.</p><p><strong>Maybe there&#8217;s a startup opportunity somewhere in there.</strong></p><div><hr></div><h3><strong>03 |  </strong>The fate of the application layer</h3><p><strong>Today, the market is betting that agents will render the application layer obsolete. When an agent can generate a workflow on demand &#8211; pulling in the requisite data, executing the task, building the interface that lets humans read the output &#8211; workflow SaaS products like Salesforce or Adobe lose their reason to exist. How do you see the software stack evolving?</strong></p><p>I posted a tongue-in-cheek <a href="https://x.com/DhruvBatra_/status/2014036895987278225">tweet</a> recently where I wrote that maybe coding is just caching for LLM inference.</p><p>What I mean is, when you write a software program, you&#8217;re really saving down a solution &#8211; a codified way to solve a problem. Code is therefore a stored solution. And if intelligence is the ability to solve problems, then, in the AI era, <em>code is just stored intelligence.</em></p><p>Historically, we built generic apps like Salesforce or Fitbit because it was expensive to solve each person&#8217;s problem individually. You build one solution and ship it to millions.</p><p>But if LLMs make it cheap to read and reason over code, you can generate custom solutions on demand &#8211; hyper-personalized apps, built on the fly.</p><p><strong>Does that imply we </strong><em><strong>are</strong></em><strong> heading toward the collapse of business and consumer apps?</strong></p><p>It&#8217;s possible. The models will get good enough. But the tradeoff comes down to two things.</p><p>First, efficiency. It&#8217;s wasteful to regenerate the same code for millions of people. It&#8217;s economically efficient for someone to own the shared layer, shared libraries and components.</p><p>Second, design. Not everyone has product taste. Good software encodes good design decisions. That expertise gets packaged into specialized apps &#8211; tools that do something well, presented clearly. You can&#8217;t generate everything on the fly. Humans still need curation.</p><p><strong>Can I offer a third? Network effects. You&#8217;ll still want users collaborating across the same plane of data, context, and reusable functions. Complete customization, all the way down the stack, makes that hard.</strong></p><p>Yes. 
Any one user may not have the autonomy, full picture, or authority to take the software in arbitrary directions.</p><p><strong>And if that&#8217;s the case, it seems like context management &#8211; especially </strong><em><strong>shared</strong></em><strong> context management &#8211; becomes the next layer of value.</strong></p><p><strong>Maybe the &#8220;application layer&#8221; becomes middleware for agents &#8211; something that stitches together system state, builds a context graph for how work gets done, encodes valid operations, and provides a control plane for human oversight.</strong></p><p><strong>Not the agent itself, not the data layer. But specialized software that helps agents traverse and collaborate in specific digital environments.</strong></p><p>That framing makes sense to me.</p><div><hr></div><h3><strong>04 |  </strong>AI&#8217;s novelty problem</h3><p><strong>Do you think there will be two categories of agents? One for constrained environments like enterprise software, and one for the open web?</strong></p><p>I lean toward no. The open web is the harder problem. If you can handle that, a constrained environment is just a narrower, more specified version. You tell the agent the rules and constraints, and it should be able to adapt.</p><p>You see this with coding agents. If you can operate across many different coding environments, specialized ones typically aren&#8217;t a problem &#8211; as long as they&#8217;re represented in the training data.</p><p><strong>That caveat &#8211; &#8220;as long as they&#8217;re in the training data&#8221; &#8211; seems important.</strong></p><p>Yes. The AI field has basically decided we don&#8217;t need to solve generalization.</p><p><strong>What do you mean?</strong></p><p>In machine learning, <em>generalization</em> means the system is capable of running inference on information outside the training distribution the AI has been exposed to.</p><p>When I taught Intro to Machine Learning, I introduced generalization as the goal of AI systems on day one. The whole point was building systems that could extrapolate beyond their training data.</p><p>But as the field commercialized, we moved away from that. We&#8217;ve implicitly decided we will just make the training distribution wide enough that everything we will see at inference is something we&#8217;ve already seen in training.</p><p><strong>Do you think that holds the field back?</strong></p><p>Not necessarily. Scaling up has led to commercial success of AI models, which funds investment and progress. But I do wonder about the implications.</p><p>If you give up on generalization, can AI ever develop <em>truly</em> novel solutions? Can it solve problems that no human has ever solved before? Because by definition, those solutions won&#8217;t show up in the training data.</p><p><strong>That seems like it could be a limitation.</strong></p><p>I don&#8217;t know the answer. It&#8217;s possible that wide enough training data effectively gets us there. Maybe we&#8217;ll find out soon, it&#8217;s an exciting time.</p><div><hr></div><h3><strong>05 |  </strong>Bots aren&#8217;t all bad</h3><p><strong>Open standards like TCP/IP, HTTP, OAuth were the connective tissue of the old web. They let disparate software systems communicate. We&#8217;re now <a href="https://www.thetimes.blog/p/agents-are-learning-to-talk">seeing</a> the first agentic standards emerge &#8211; MCP for tool access, UCP for commerce. 
Are there any gaps you still see?</strong></p><p>From an advancement of science perspective, I&#8217;ve generally been a proponent of building as much in the open &#8211; open weights, open standards &#8211; as your economic constraints allow.</p><p>At Yutori, for example, our n1 model is post-trained from the Qwen<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-16" href="#footnote-16" target="_self">16</a> family of models. I&#8217;d love to see more American open-weight and open-source models out there.</p><p>But one open standard I&#8217;d like to see is a <em>permission layer for agents.</em></p><p>In the future, I imagine everyone will have one or several digital assistants that shadow them, with read access to most things and configurable limits on write access &#8211; i.e., don&#8217;t purchase above this amount, don&#8217;t reach out to these people directly.</p><p>Today, it&#8217;s mostly all-or-nothing. You either give an agent full access or no access. OAuth exists for users, but there&#8217;s no equivalent for agents.</p><p><strong>What about agent-to-agent coordination? In closed environments, MCP and A2A are making real progress. But when it comes to decentralized or trustless environments like the open web, is there a gap?</strong></p><p>Right now, many website owners still operate from the mindset that automated traffic is bad. As a security measure, if they detect automated traffic, they try to block it.</p><p>The web needs to get more nuanced. The world is moving toward agents being the primary drivers of digital action. When an agent makes a request, it should be able to declare &#8211; here&#8217;s who I am, here&#8217;s who I&#8217;m operating on behalf of, here&#8217;s my intent.</p><p>There are commercial questions to sort out too. Is the agent paying for access? Or is the website advertising to the agent? But those are secondary.</p><p><em>The first step is a mindset shift.</em> We need to move past the assumption that bots are bad and toward a model where agents can identify themselves and transact legitimately.</p><div><hr></div><h3><strong>06 |  </strong>The AI-human interface</h3><p><strong>Right now, users consume the outputs from your web agents through traditional channels like a web browser or email. In 5-10 years, what do you think the human-AI interface will look like?</strong></p><p>My perspective on this question is heavily influenced by my time at Meta.</p><p>The team I led there was called FAIR Embodied AI. One of my teams built an image question-answering model that shipped on Ray-Ban Meta glasses, in collaboration with the product team.</p><p>I think there is a unique set of constraints that point to glasses as the right form factor.</p><p>They&#8217;re socially acceptable and lightweight. The world is already laid out for humans, so that&#8217;s the vantage point you&#8217;d want. And the experience I imagine AI should deliver is &#8211; a superintelligent assistant, watching me live my life, proactively and discreetly intercepting when I need it. Glasses fit into that.</p><p>Of course, there are a lot of ways that can go wrong. Too proactive, and you end up annoying the user. Not careful about privacy, and you end up in nightmarish scenarios; too careful, and you cut off necessary context.</p><p>But there is a magical balance great products can strike.</p><p><strong>Is there a form factor that you would short?</strong></p><p>Earbuds with cameras. I always found those a bit amusing.
I never saw that working.</p><p><strong>The eyes don&#8217;t belong where the ears should be?</strong></p><p>Yeah, but also, you know, people have hair that gets in the way. It&#8217;s just not going to work.</p><p><strong>For all the folks building in AI wearables, remember &#8211; people have hair. We&#8217;ll leave it with that insight. Thank you Dhruv.</strong></p><p>Thanks for having me.</p><p></p><div><hr></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p><em>Embodied AI</em> focuses on agents or robots that operate autonomously in physical or simulated environments and that respond to dynamic conditions in real-time. This is distinct from AI systems like LLMs that process static inputs, such as images or text.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p><em>Opus 4.5</em> is the flagship model from Anthropic. <em>GPT 5.2</em> is the frontier reasoning model from OpenAI. <em>Gemini 2.5</em> is the multimodal &#8220;thinking&#8221; model series from Google DeepMind.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p><em>Perception-action feedback loops</em> refer to the cycle where an agent observes its environment (perception), takes an action, and then observes the consequences of that action to inform its next move. Early foundation models were trained on static data &#8211; screenshots and text &#8211; but not on the sequential, interactive data of taking actions and helping the agent learn from their consequences.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p><em>Supervised fine-tuning (SFT)</em> is a post-training technique where you show the model examples of correct behavior &#8211; in this case, examples of successful web navigation &#8211; and teach it to imitate that behavior.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p><em>Reinforcement learning (RL)</em> is a technique that trains a model through trial and error. The model takes actions, receives feedback on whether they succeeded or failed, and adjusts its behavior accordingly. Unlike SFT, the model learns from its own experience rather than curated examples.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p><em>Sim-to-real gap</em> refers to the difference between how an AI system performs in a simulated environment versus the real world. 
Policies learned in simulation often fail to transfer because simulators don&#8217;t perfectly replicate real-world conditions.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p><em>Thrashing</em> is a failure mode in agentic systems where the agent spends so much overhead managing tools and context &#8211; tracking which tools are available, what&#8217;s been tried, what results have or have not come back &#8211; that it can&#8217;t make progress on the actual task.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-8" href="#footnote-anchor-8" class="footnote-number" contenteditable="false" target="_self">8</a><div class="footnote-content"><p><em>LiDAR</em> is a sensing technology used in autonomous vehicles and robotics that uses laser pulses to measure distances and build a 3D map of the surrounding environment.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-9" href="#footnote-anchor-9" class="footnote-number" contenteditable="false" target="_self">9</a><div class="footnote-content"><p><em>Embedding</em> is a technique for converting text into numerical vectors that capture semantic meaning, allowing systems to find similar content by comparing vectors (rather than matching exact words). This used to be the standard approach for building memory systems in agents &#8211; you&#8217;d store past outputs as embeddings in a vector database and retrieve similar ones to check for duplicates.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-10" href="#footnote-anchor-10" class="footnote-number" contenteditable="false" target="_self">10</a><div class="footnote-content"><p><em>Grepping</em> and <em>diffing</em> are basic command-line operations for working with text. Grep searches files for specific patterns; diff compares two files and shows the differences. Here, the LLM uses these operations to search past reports and check what&#8217;s changed or what information is new.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-11" href="#footnote-anchor-11" class="footnote-number" contenteditable="false" target="_self">11</a><div class="footnote-content"><p>A <em>context graph</em> is a structured representation of the knowledge an agent needs to operate effectively &#8211; the relationships, rules, and logic that define how things work. In enterprise software, this might be the tacit knowledge of how an organization operates (workflow steps, permissions, templates, etc). In a web agent, it might be the history of past actions, user preferences, and tool outputs.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-12" href="#footnote-anchor-12" class="footnote-number" contenteditable="false" target="_self">12</a><div class="footnote-content"><p>A <em>context window</em> is the amount of text an AI model can process at once, essentially its working memory. 
Most frontier models today have context windows ranging from 128k to 1 million tokens, roughly equivalent to 100 to 750 pages of text.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-13" href="#footnote-anchor-13" class="footnote-number" contenteditable="false" target="_self">13</a><div class="footnote-content"><p><em>MCP (Model Context Protocol)</em> is an open standard developed by Anthropic that allows AI agents to connect to external tools and data sources through a unified interface.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-14" href="#footnote-anchor-14" class="footnote-number" contenteditable="false" target="_self">14</a><div class="footnote-content"><p>A <em>harness</em> is the wrapper or framework around an agent that controls how the AI operates &#8211; managing tool calls, handling responses, and deciding what goes into the model&#8217;s context window.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-15" href="#footnote-anchor-15" class="footnote-number" contenteditable="false" target="_self">15</a><div class="footnote-content"><p><em>System 1</em> and <em>System 2</em> thinking is a framework in psychology for two modes of thinking. System 1 is fast, automatic, and intuitive, like recognizing a face or catching a ball. System 2 is slow, deliberate, and effortful, like solving a math problem or learning a new skill. With practice, System 2 processes can become System 1. What once required conscious effort becomes instinct.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-16" href="#footnote-anchor-16" class="footnote-number" contenteditable="false" target="_self">16</a><div class="footnote-content"><p><em>Qwen</em> is a family of open-weight models developed by Alibaba. &#8220;Open-weight&#8221; means the model parameters are publicly available, allowing other companies to use Qwen as a base and fine-tune it for specialized applications.</p></div></div>]]></content:encoded></item><item><title><![CDATA[The Erosion of Rented Intelligence]]></title><description><![CDATA[A Conversation with Fireworks Co-founder Benny Chen]]></description><link>https://www.thetimes.blog/p/the-erosion-of-rented-intelligence</link><guid isPermaLink="false">https://www.thetimes.blog/p/the-erosion-of-rented-intelligence</guid><dc:creator><![CDATA[Evan O'Donnell]]></dc:creator><pubDate>Sat, 03 Jan 2026 15:08:26 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!NC38!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F75d0b06b-0321-4009-8694-1fb5620e08d3_692x370.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>There&#8217;s a paradox in Silicon Valley.</p><p>The industry celebrates contrarian thinking. Yet so often, the underlying mechanics &#8211; social proof, the pace of capital deployment, the pull of the power law &#8211; bias toward consensus.</p><p><a href="https://x.com/the_bunny_chen">Benny Chen</a> strikes me as an exception. 
A founder who reasons from first principles and follows his conviction, independent of market sentiment.</p><p>Three years ago, Benny and several members of the team behind PyTorch &#8211; one of the most widely used frameworks for building neural networks &#8211; left Meta with the insight that enterprises would want to <em>own</em> their AI, not <em>rent</em> it<em>.</em></p><p><strong>In 2022, most did not see the world this way.</strong></p><p>The prevailing wisdom said frontier AI capabilities required incredible scale and deep pockets. Most assumed the next generation of AI applications would be built on closed, general-purpose models from labs like OpenAI and Anthropic.</p><p>Benny and the team believed something different, that companies would want AI they could more directly customize and control.</p><p>So they built <a href="https://fireworks.a">Fireworks</a>, a platform that lets companies take open-source AI models,<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> train them on their own data, and deploy them with speed and efficiency.</p><p>The world is now catching up. DeepSeek proved open-source models could compete at the frontier. Today, open models deliver <a href="https://download.ssrn.com/2025/11/18/5767103.pdf?response-content-disposition=inline&amp;X-Amz-Security-Token=IQoJb3JpZ2luX2VjEP%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F%2FwEaCXVzLWVhc3QtMSJHMEUCIQCe1uRKMfYif%2FrzEZu3yFJYhk4RxLjMqs4CAG%2BCA8fZhgIgF5JuIfOTawmFr7ikPkhQiH%2Bwgxy9CjIY9PLkzdMIDQcqxgUIx%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F%2FARAEGgwzMDg0NzUzMDEyNTciDEyg926nQgRnc2aWLSqaBRrG7PpjXkpL8VBrm59QvcsbfKoEgGR7iaOcbQs1M8IUX2cRYvoCAsiNKC5hTC6VLerNmPy%2FMQGdvfrYwo3P9XeMo%2FRMSlLBvD3V85IH4PNbDzenZ0JRcCTzBRkPCZ5VZGWTsGWUpW4mW6tMzJ1g0CLhsxGYOTJHbgglQunOop1OB7g0wXghZBZbjoCFjuS5Dr9JVYkhT%2B7p9zsuUt%2FDP1l3%2BNLbWOcj3g%2BekD5fW6EhtxasJBjVUgnl5YmDEMgM7MtTFwdBF6ZAjZS%2FO1OJK6LuBzt8plujvqsZxIuZcXUAulzRN3QN%2FZQbJhoKtspv5PyPRRn02QBUXHcShbwgwnXT%2BYisBly2xrwUliGk5qdhvWEs0PyT8j%2ByLhoywx%2BjVgwGKvLIQ3XJkt8PTZsdwp5LjqgU9W897RpgMNsOIQ2a%2BuomnK2EXtRniDJBwCUXdSeF7TeAYAcrzNj1VG1lHxCgokzbtz9syZ7sN%2BzBulaEjGpQvljD3QCd4AJQrtZ9jq1AcpKknF4tzHa4gjyEVSExnujPGNTqFT1cqfOiu%2FQk4s85Xb2KseQ5JR0Ul2cWmE9F13x5xHHHLxYeTwnU0MaAXEoSRMUTSJLklxuloN7mRl81wzYEBsbkE9c%2B3p2CoZbc5HSv%2BilUgDpOuPYEv9PgSZk8TAT588EYKrLGeb2FXruFrTCkNK8VKOcFJdUsvpn55%2FlDn%2FtOdv5%2Ba2DCugzcYVshXzy3LeQaUO7giyLCA%2BFSCrhtKMzUE0tt67FkcJdwVHC0QZND5Z9o9j%2BpdhOiXYufZIyfjt74F5%2B1O5sCFmqXpjJ%2FdIR%2BVDfvaVRtvKbZHFxV0wKxFzq3JS8R2rPgZp0eNMTp0oJx5uWsWIW%2FGq%2B63t5DSYAheDCdweLJBjqxARQmUQsSrSH7yDRndcNaUsi%2FCRjPzbm1K3FZQXhWBRTl%2FUkzM2csA%2BxEv7Tr%2FlOZFTmB8kFmgKq40VSMb2iLttzzA0UxB3k9SgDuCXV3MclPnaEjfxelqVhrC3AncCibli2R92nHNdLlBLc8FPYUJdBLFylRBALw1AD%2F2tn5%2BGriPZIRDqfnpHj8DcAF9vosyl6Y519uvI78mLFrbxr7pTW%2Bn%2BzfQ3y2VFgGxE91i%2Bzuzg%3D%3D&amp;X-Amz-Algorithm=AWS4-HMAC-SHA256&amp;X-Amz-Date=20251209T225251Z&amp;X-Amz-SignedHeaders=host&amp;X-Amz-Expires=300&amp;X-Amz-Credential=ASIAUPUUPRWE2HT4NVXC%2F20251209%2Fus-east-1%2Fs3%2Faws4_request&amp;X-Amz-Signature=bac8806873b200e06a45f983e1151b1ad63e867d74f27300a7fb684f35b18220&amp;abstractId=5767103">~90%</a> the performance of closed-models &#8212; at <em>one-sixth</em> the cost.</p><p>Fireworks is riding this wave. 
The company processes <a href="https://www.linkedin.com/feed/update/urn:li:activity:7401304105320873985/">13T</a> tokens <em>daily</em> (the same <a href="https://blog.google/inside-google/message-ceo/alphabet-earnings-q3-2025/">scale</a> as Gemini&#8217;s developer API) for over 10,000 customers, including Uber, Shopify, and Notion. This October, three years after launch, they crossed <a href="https://fireworks.ai/blog/series-c">$280 million</a> in annualized revenue.</p><p>I sat down with Benny to discuss:</p><ul><li><p>Why he believes demand for open-source AI is about to hit an inflection point.</p></li><li><p>How enterprises can exceed frontier model performance &#8211; with only 100 rows of data.</p></li><li><p>Predictions for AI in 2026.</p></li></ul><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!NC38!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F75d0b06b-0321-4009-8694-1fb5620e08d3_692x370.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!NC38!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F75d0b06b-0321-4009-8694-1fb5620e08d3_692x370.png 424w, https://substackcdn.com/image/fetch/$s_!NC38!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F75d0b06b-0321-4009-8694-1fb5620e08d3_692x370.png 848w, https://substackcdn.com/image/fetch/$s_!NC38!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F75d0b06b-0321-4009-8694-1fb5620e08d3_692x370.png 1272w, https://substackcdn.com/image/fetch/$s_!NC38!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F75d0b06b-0321-4009-8694-1fb5620e08d3_692x370.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!NC38!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F75d0b06b-0321-4009-8694-1fb5620e08d3_692x370.png" width="404" height="216.01156069364163" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/75d0b06b-0321-4009-8694-1fb5620e08d3_692x370.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:370,&quot;width&quot;:692,&quot;resizeWidth&quot;:404,&quot;bytes&quot;:168039,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.thetimes.blog/i/183215476?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F75d0b06b-0321-4009-8694-1fb5620e08d3_692x370.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!NC38!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F75d0b06b-0321-4009-8694-1fb5620e08d3_692x370.png 424w, 
https://substackcdn.com/image/fetch/$s_!NC38!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F75d0b06b-0321-4009-8694-1fb5620e08d3_692x370.png 848w, https://substackcdn.com/image/fetch/$s_!NC38!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F75d0b06b-0321-4009-8694-1fb5620e08d3_692x370.png 1272w, https://substackcdn.com/image/fetch/$s_!NC38!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F75d0b06b-0321-4009-8694-1fb5620e08d3_692x370.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><div><hr></div><h3><strong>01 | </strong>Frontier models <strong>&#8800;</strong> frontier capabilities</h3><p><strong>EO: Tell me about the decision to launch Fireworks.</strong></p><p><strong>Back in 2022, many people assumed that the future belonged to the big, closed model labs. Billion-dollar investments in training and compute were considered the price of admission.</strong></p><p><strong>What gave you the conviction to go the other direction, to build a company around open-source?</strong></p><p>BC: Focusing on open-source and inference<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> may have been contrarian to others. But to us it was very clear.</p><p>When we started, there was no ChatGPT. But we&#8217;d supported a lot of internal AI models in production at Meta, making sure workflows scaled economically.</p><p>Because of that experience, we believed <em>most value in AI would come from inference.</em></p><p>We decided to build a company that helps developers deploy best-in-class AI <em>profitably</em>, and then we take a cut. That, we believe, is a more sustainable business model than raising huge amounts of money to come up with frontier models.</p><p>I think Sam Altman once <a href="https://techcrunch.com/2019/05/18/sam-altmans-leap-of-faith/">said</a> artificial general intelligence (AGI) could capture the &#8220;light cone of all future value&#8221; &#8211; that OpenAI is playing to capture all of humanity&#8217;s future economic value.</p><p>That&#8217;s an interesting proposition &#8211; that if you have a monopoly, no one else can ever figure out how to take it. I don&#8217;t think that&#8217;s ever been true in the history of capitalism. If there&#8217;s enough profit, someone will figure out how to compete.</p><p><strong>But the predominant narrative is you have to raise a lot of money to stay at the frontier, and that developers will always demand frontier capabilities for their products to stay competitive.</strong></p><p><strong>It sounds like you believe something different.</strong></p><p>I think &#8220;open-source&#8221; is just a fancy term for &#8220;sharing the burden&#8221; &#8211; a bunch of people opening up what they did so others can replicate it without making the same mistakes. And if you have tens of thousands of GPUs running in the background, mistakes are very costly.</p><p>Eventually, we believe people will act rationally. And sharing the burden for these costly endeavors is normally the rational thing to do.</p><p><strong>By &#8220;act rationally,&#8221; you mean stop focusing as much on model capability, and more on cost?</strong></p><p>Absolutely. <em>One day, reality will hit. 
Cost will eventually become important.</em> Everyone will have to figure out how to stay in business under those constraints. If you&#8217;re honest about that on day one, your business model will be much cleaner in the long run.</p><p>The other part of your question was whether enterprises will demand frontier capabilities. Yes, they will. But just because they need frontier <em>capabilities</em> does not mean they need frontier <em>models</em>.</p><p>Fireworks works with startups like <a href="https://www.genspark.ai/">Genspark</a> to build custom evaluation suites,<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> then use our training platforms &#8211; we call them reinforcement fine-tuning (RFT)<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> platforms &#8211; to plug those evaluations in. This gives our customers frontier AI capabilities, but with open-source models.</p><p>When we do that &#8211; when we set up the data flywheel properly &#8211; &#8220;frontier model&#8221; and &#8220;frontier capability&#8221; are decoupled.</p><p>We believe that&#8217;s more sustainable. Because even if frontier models can see the traffic you&#8217;re sending them, they don&#8217;t know how the developer defines success for their specific use case.</p><p>That production data feedback loop &#8212; knowing what success looks like and how your product performs against that &#8212; <em>that&#8217;s</em> <em>something application developers have that the model labs never will.</em></p><p>We are helping the builders who have that data.</p><div><hr></div><h3><strong>02 | </strong>&#8220;Open-source models are about to take over.&#8221;</h3><p><strong>Help me make sense of the market as it pertains to open- vs closed-source.</strong></p><p><strong>On the one hand, open-weight models offer <a href="https://epoch.ai/data-insights/open-weights-vs-closed-weights-models">near parity</a> on raw performance, at a fraction of the cost. 
For example, customers who move to your platform see an <a href="https://fireworks.ai/blog/series-c">8x</a> average cost reduction.</strong></p><p><strong>So the </strong><em><strong>value</strong></em><strong> is there, but </strong><em><strong>adoption</strong></em><strong> less so.</strong></p><p><strong>In aggregate, these models only power <a href="https://menlovc.com/perspective/2025-the-state-of-generative-ai-in-the-enterprise/">11%</a> of total workloads &#8211; and that&#8217;s </strong><em><strong>down</strong></em><strong> from 19% last year.</strong></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ZF86!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3c04d8d9-c4a0-48ff-b4d1-bd4dc96a5d49_1136x617.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ZF86!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3c04d8d9-c4a0-48ff-b4d1-bd4dc96a5d49_1136x617.png 424w, https://substackcdn.com/image/fetch/$s_!ZF86!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3c04d8d9-c4a0-48ff-b4d1-bd4dc96a5d49_1136x617.png 848w, https://substackcdn.com/image/fetch/$s_!ZF86!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3c04d8d9-c4a0-48ff-b4d1-bd4dc96a5d49_1136x617.png 1272w, https://substackcdn.com/image/fetch/$s_!ZF86!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3c04d8d9-c4a0-48ff-b4d1-bd4dc96a5d49_1136x617.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!ZF86!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3c04d8d9-c4a0-48ff-b4d1-bd4dc96a5d49_1136x617.png" width="1136" height="617" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3c04d8d9-c4a0-48ff-b4d1-bd4dc96a5d49_1136x617.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:617,&quot;width&quot;:1136,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:103428,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.thetimes.blog/i/183215476?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3c04d8d9-c4a0-48ff-b4d1-bd4dc96a5d49_1136x617.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!ZF86!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3c04d8d9-c4a0-48ff-b4d1-bd4dc96a5d49_1136x617.png 424w, https://substackcdn.com/image/fetch/$s_!ZF86!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3c04d8d9-c4a0-48ff-b4d1-bd4dc96a5d49_1136x617.png 848w, 
https://substackcdn.com/image/fetch/$s_!ZF86!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3c04d8d9-c4a0-48ff-b4d1-bd4dc96a5d49_1136x617.png 1272w, https://substackcdn.com/image/fetch/$s_!ZF86!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3c04d8d9-c4a0-48ff-b4d1-bd4dc96a5d49_1136x617.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><strong>How do you interpret all this?</strong></p><p>Let&#8217;s break it down, around specific capabilities.</p><p>When it comes to<em> software engineering, </em>closed models are way ahead<em>.</em> You may not see that in benchmarks. But if you use these coding tools on a daily basis, you can tell &#8211; closed is much better. Within coding, Anthropic is the undisputed king.</p><p>Coding was the first killer use case for AI, and still represents a significant portion of total spend. Which is why closed models still dominate overall.</p><p>Open-source is also behind in<em> multimodal understanding.</em> That&#8217;s where Gemini is amazing, much better than everyone else at image recognition and video understanding.</p><p>Aside from those two domains &#8212; coding and multimodal &#8212; open and closed models are pretty close. Right now, there&#8217;s a level of performance that closed models deliver. But open-source models are catching up to that.</p><p>A lot of companies are considering both. They may be using closed right now, but open is very close in terms of capability. And when it flips, it will flip very quickly. <em>I think open-source models are about to take over.</em></p><p><strong>That&#8217;s a big statement. Why do you think that?</strong></p><p>We work with a lot of application developers. 
And for non-programming use cases, the open models are already very close or better.</p><p><strong>And by &#8220;non-programming use cases,&#8221; you mean tool calling and multi-step workflows?</strong></p><p>Yes exactly.</p><p>A good way to understand it&#8230; For tasks where models aren&#8217;t yet 100% reliable &#8211; for example, software engineering, which is maybe at 20-30% today &#8211; you&#8217;ll always want the latest and greatest model. You want every advantage you can get.</p><p>But for workloads where there is a way to hit near-100% accuracy &#8211; and open-source models are starting to saturate those benchmarks &#8211; it&#8217;s more about usability, quality, and cost. That&#8217;s where open-source will win every time.</p><p><strong>Is there a reason why the frontier labs are so much further ahead on coding and multimodal understanding? And how sustainable is that paradigm?</strong></p><p>With <em>coding</em>, Anthropic&#8217;s leadership is focused on programming use cases. The money they spend on acquiring data, environments, and evaluations is all very directed at coding. I respect that; focus is everything.</p><p>For <em>multimodal</em>, Gemini has YouTube. No one else has access to that kind of data.</p><p>Whether they maintain this edge is up to execution. It&#8217;s very hard to predict that, but my guess is they continue to do those very well.</p><p><strong>So is there a scale or threshold where open-source starts to become more interesting for an enterprise customer?</strong></p><p><em>Scale</em> is one aspect. At Fireworks, we can deliver lower total cost of ownership<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a> with open-source.</p><p>The other aspect is <em>self-sufficiency</em>. The founders at these model labs have ideologies and constraints they impose on their model behavior. And so self-sufficiency is a big reason why people come to Fireworks.</p><p><strong>By self-sufficiency, you mean they demand a level of customization they&#8217;ll never get with closed models?</strong></p><p>Yes. With their own evaluations and model customization, they can control their own destiny.</p><p>One way to think about it&#8230; imagine if every business only had one supplier. That makes no sense. You want multiple suppliers throughout your supply chain. So companies see that risk, and they want to mitigate it.</p><p>Self-sufficiency through open-source keeps all partnership and M&amp;A exit doors open for your company.</p><div><hr></div><h3><strong>03 | </strong>Opening up the playbook</h3><p><strong><a href="https://fireworks.ai/blog/fireworks-rft">Fireworks RFT</a> is a managed service that lets enterprises fine-tune open-source models, using reinforcement learning.</strong></p><p><strong>Fireworks describes it as giving developers &#8220;the same playbook frontier labs guard so closely&#8221; &#8211; but now applied directly to their own workflows and data.</strong></p><p><strong>How exactly does RFT deliver performance that can match or beat frontier models, like Claude or Gemini?</strong></p><p>RFT is effective because of sample efficiency. You can customize a model with only 100 rows of data.</p><p>Developers are busy. If you tell them they need 10,000 rows of clean data to get started, they know that&#8217;s a day job for ten people to label the data, run quality control, etc. 
No one will do that.</p><p>But these developers are <em>already</em> collecting failure cases as they build their applications. They are constantly comparing new versions of Gemini, Claude, and GPT in their software. This means they often already have evaluation suites with 50-100 examples. Moving those directly into reinforcement learning is very straightforward.</p><p>On top of that, Fireworks offers a managed service that handles the correct defaults<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a> for developers &#8211; numerical consistency, tokenization, GPU failure recovery, etc.</p><p><strong>How do you scale that service across such a diverse customer base?</strong></p><p>Developers are building agents in different languages, frameworks, and environments. RFT only works if we can connect to all of these different agents without asking developers to rewrite their code. As a general principle, if you want to help everyone standardize around a particular methodology, it has to be built in the open.</p><p>So we released <a href="https://fireworks.ai/blog/eval-protocol">Eval Protocol (EP)</a>, an open-source framework that standardizes how any agent connects to RFT for training and evaluation.</p><p><strong>Last March, Andrej Karpathy tweeted that there&#8217;s an &#8220;<a href="https://x.com/karpathy/status/1896266683301659068?lang=en">evaluation crisis</a>,&#8221; that the standard industry AI benchmarks are no longer a good predictor of real-world performance.</strong></p><p><strong>EP seems designed to address this, to help developers run evaluations on their actual workflows.</strong></p><p><strong>For example, one core concept in EP is separating </strong><em><strong>evaluation criteria</strong></em><strong> (what constitutes &#8220;good&#8221; behavior) from the </strong><em><strong>model</strong></em><strong> producing it. That lets teams swap and compare how different models perform on the same, real-world tasks.</strong></p><p><strong>Can you walk through how you designed EP &#8211;&nbsp;and why?</strong></p><p>First, some context. The closed model labs pay millions of dollars to buy simulated websites &#8211; perfect mocks of an e-commerce or airline ticketing system &#8211; to develop and test their models.</p><p>We don&#8217;t believe in that approach. These isolated environments rarely match what&#8217;s happening in production. Test databases might be incomplete, or the UIs behave differently.</p><p>At Fireworks, we help developers build evaluation suites that work with their existing agents, so their RL training environment matches production. <em>This ensures consistency &#8211; across training, environment, and inference.</em></p><p>We also designed EP for developers without any machine learning backgrounds. For TypeScript and Rust developers, who had never touched Python and had built their agents in isolated, hosted environments.</p><p>We realized the only clean way to build evals for them is through <em>observability</em> &#8211; tracing agent outputs in production, labeling successes and failures, and feeding that directly into training. No need to package a working TypeScript agent into a Python container.</p><p>This &#8220;tracing-first&#8221; design means developers can build their agent however they want. We adapt to them.</p><p><strong>Do you think EP grows demand for your own product? 
The theory being &#8211; as developers better evaluate how a model is performing, they demand more customization, then they turn to Fireworks to enable that.</strong></p><p>Absolutely. I see buyers get more sophisticated when they run proper evaluations.</p><p>The more sophisticated the buyer, the more rational the buyer. And we believe open-source offers a more rational unit economic model.</p><p><strong>Are there any new, cutting edge techniques you&#8217;re excited about?</strong></p><p>Some, like Google&#8217;s new family of architecture <a href="https://arxiv.org/pdf/2501.00663">Titans</a>, which allows models to learn in production, on the fly.</p><p>But I&#8217;m honestly more excited about better<em> data</em>, rather than new techniques.</p><p>Before Fireworks, I was at Meta, moving ad models onto ASICs and GPUs for faster inference. I experimented with lots of new techniques and model architecture. My takeaway: better techniques only give you 20-30% of the gain. Most improvement comes from better data.</p><p><strong>What are the gaps around data you still need to solve for?</strong></p><p>Representativeness. In other words&#8230; Does your evaluation reflect what your customers care about?</p><p>Developers tend to over-index on things they personally notice, or failure cases they hit early on. Their evals rarely match exactly what their customers care about. And if your evals aren&#8217;t representative, you don&#8217;t get much value from RL.</p><p>The gap is translating what their customers want into concrete rubrics a language model can judge against.</p><p><strong>So where are developers struggling? If they know what users want and can articulate their business model, why can&#8217;t they translate that into something a language model can interpret?</strong></p><p>The hard part is moving developers from <em>cardinal</em> judgment (rating output on an absolute scale) to <em>ordinal</em> judgment (comparing several outputs on a relative basis).</p><p>For example, I&#8217;m a terrible judge of art. If you give me ten pieces and ask me to score each on a 1-10 scale, I can&#8217;t give consistent answers. But if you repeatedly show me two pieces and ask which I prefer, I&#8217;ll be much more consistent. I may not be able to articulate my internal rubric for scoring, but I can make reliable comparisons.</p><p>Language models work the same way. Setting up evaluations under a comparative framework is a big improvement.</p><p>Another place developers struggle is writing airtight rubrics. If you now force me to judge art on a 1-10 scale, I need to create a rubric without <em>any</em> loopholes.</p><p>That&#8217;s because LLMs are very creative. If you leave any ambiguities in your system, especially in production, they will exploit that. Writing evaluation criteria that close all those gaps is genuinely difficult, and that&#8217;s where we try to help developers.</p><div><hr></div><h3><strong>04 | </strong>AI predictions in 2026</h3><p><strong>As we wrap, I&#8217;d love to do some rapid-fire questions.</strong></p><p>Shoot!</p><p><strong>Context management</strong><a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a><strong> and state</strong><a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-8" href="#footnote-8" target="_self">8</a><strong> come up often as pain points in building AI applications. 
Are there any trends you&#8217;re excited about that solve for this?</strong></p><p>As for <em>context management</em>, there have been a few recent <a href="https://arxiv.org/pdf/2510.18234">papers</a> around using images as a way to compress information, what&#8217;s called &#8220;contexts optical compression.&#8221;</p><p>Also, in production, we&#8217;re seeing agentic retrieval replace traditional RAG.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-9" href="#footnote-9" target="_self">9</a> This is a much more effective approach. Just like a human would say &#8220;I need the Q3 sales data&#8221; and then retrieve that information from the right file, the agent asks itself &#8220;What information do I need?&#8221; and retrieves it on-demand. This is aligned with what Claude is pushing with <a href="https://platform.claude.com/docs/en/agents-and-tools/agent-skills/overview">Skills</a> &#8211; organizing tools into folders the agent can navigate, instead of shoving 200 tools into your context.</p><p>So agentic search is picking up, and I expect that will be a growing trend in 2026.</p><p><em>State</em>, on the other hand, is more nuanced and application-specific. Sometimes &#8220;state&#8221; refers to whether you can replay an agent from the same point for error recovery. Sometimes it&#8217;s about how to do hand-offs between agents.</p><p>Agentic retrieval is happening everywhere, but state looks completely different across applications. My guess is there won&#8217;t be consolidation around state any time soon &#8211; it&#8217;s too context-specific.</p><p><strong>What&#8217;s something in AI you think most people get wrong?</strong></p><p>A big debate in Silicon Valley is around humanoid robots. Everyone agrees AI agents in, say, customer service and coding will make money. But people are 50/50 on whether humanoids will succeed.</p><p>I think it will be incredibly difficult for companies focused on humanoids to make money any time soon. There&#8217;s too much real-world complexity. And humanoids face competition from <em>both</em> actual humans <em>and</em> task-specific robots, which have much more efficient form factors for particular jobs.</p><p><strong>What startup categories in AI are you most excited about over the next 2-3 years?</strong></p><p>Without getting too self-promotional, AI infrastructure. There&#8217;s still so much to build.</p><p><strong>Where in the infra stack specifically?</strong></p><p>Companies doing semi-services, semi-evaluations. Those that can help businesses in a particular vertical digitize their processes, set up evaluation suites, and connect those to fine-tuning to improve models.</p><p>I like startups that don&#8217;t shy away from the fact that they do some services. That&#8217;s the right attitude. Customers are adjusting to it more because of the success of Palantir and their forward-deployed engineers.</p><p>Fully embracing that service component in order to help your customers solve a real problem &#8211; that&#8217;s critical.</p><p><strong>Do you think those services will always come from a third party, like Fireworks? Or for some segments, do you see application providers offering this themselves, as part of their offering?</strong></p><p>Third parties provide huge value today, but it depends on the vertical. In San Francisco, everyone says, &#8220;Of course you need evaluations.&#8221; But if you leave SF, it&#8217;s a completely different world. 
Most companies with the data &#8211; that could own their custom AI stack &#8211; have no idea how to get started.</p><p>There&#8217;s going to be a new BCG/McKinsey emerging from this. They&#8217;ll sit between infrastructure and applications, understanding intimately how agentic workflows should be evaluated, and helping customers build trust in the outputs from their own AI agents.</p><p>Trust is incredibly hard to build. New players who crack that trust layer can charge a lot of money for it.</p><p><strong>This has been awesome. Thank you for the time.</strong></p><p>Thank you for having me.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p><em>Open-source (or &#8220;open-weight&#8221;) models</em> are those where the trained parameters (weights) are publicly released, allowing users to download, run, and customize them locally (unlike closed models, which are accessed via APIs).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p><em>Inference</em> refers to running a trained model to generate outputs from new inputs. Unlike training, which is one-time or periodic, inference is an ongoing operational expense that scales with usage.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>An <em>evaluation suite</em> is a set of tests and benchmarks tailored to measure how well an AI model performs on a specific use case. In other words, the metrics that define &#8220;success&#8221; for a particular software application.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p><em>Reinforcement fine-tuning (RFT)</em> is a training technique that uses feedback on model outputs (i.e., distinguishing &#8220;good&#8221; responses from &#8220;bad&#8221; ones) to train models to perform better on specific tasks. This same technique is used by frontier labs like OpenAI and Anthropic to align base models with human intent, turning them into products like ChatGPT and Claude. Fireworks makes RFT accessible for developers for their custom applications and workflows.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p><em>Total cost of ownership (&#8220;TCO&#8221;)</em> is the sum of all costs associated with software over its lifetime, including licensing fees, deployment, maintenance, support, training, and infrastructure. 
While open-source eliminates licensing fees, it can shift costs to internal engineering time for customization, integration, and support.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p><em>Defaults</em> are best-practice configurations that ensure a model trains successfully without crashing or producing inaccurate results. These include <em>numerical consistency</em> (maintaining mathematical precision across different hardware to prevent errors that can degrade performance), <em>tokenization</em> (standardizing how text is converted into the precise numerical format the model expects), and <em>GPU failure recovery</em> (automatically saving progress and resuming training if a chip fails, preventing data loss).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p><em>Context management</em> refers to what information an AI model can &#8220;see&#8221; when generating a response. LLMs have a fixed context window &#8211; a limit on how much text or data they can process at once. Developers must decide what to include (e.g., conversation history, relevant documents, user data) and what to leave out.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-8" href="#footnote-anchor-8" class="footnote-number" contenteditable="false" target="_self">8</a><div class="footnote-content"><p><em>State</em> refers to tracking where an AI agent is in a multi-step workflow &#8211; e.g., what has been completed, what it knows, and what it&#8217;s trying to accomplish.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-9" href="#footnote-anchor-9" class="footnote-number" contenteditable="false" target="_self">9</a><div class="footnote-content"><p><em>Retrieval-augmented generation (RAG)</em> is a technique where a system retrieves relevant documents and &#8220;stuffs&#8221; them into the AI&#8217;s prompt before it generates an answer. This can lead to bloated context windows and inefficient computation. In <em>agentic retrieval,</em> rather than dumping information upfront, the AI decides what to look for and when, pulling information on demand as it reasons through a task.</p></div></div>]]></content:encoded></item><item><title><![CDATA[Gemini Has 650M Users. 
Now What?]]></title><description><![CDATA[A Conversation with Google DeepMind&#8217;s Logan Kilpatrick]]></description><link>https://www.thetimes.blog/p/gemini-has-650m-users-now-what</link><guid isPermaLink="false">https://www.thetimes.blog/p/gemini-has-650m-users-now-what</guid><dc:creator><![CDATA[Evan O'Donnell]]></dc:creator><pubDate>Thu, 18 Dec 2025 21:08:51 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!cvIm!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc65b4caa-bbdd-456c-ada4-490c6ff7d589_891x469.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The launch of Google&#8217;s <a href="https://deepmind.google/models/gemini/pro/">Gemini 3 Pro</a> last month marked an inflection point in the AI race.</p><p>Gemini surpassed 650 million monthly active users and now leads on most performance <a href="https://blog.google/products/gemini/gemini-3/#gemini-3">benchmarks</a>. This progress reportedly triggered a &#8220;<a href="https://www.wsj.com/tech/ai/openais-altman-declares-code-red-to-improve-chatgpt-as-google-threatens-ai-lead-7faf5ea6">code red</a>&#8221; at OpenAI, a renewed push to compete and improve ChatGPT, and the release of <a href="https://openai.com/index/introducing-gpt-5-2/">GPT 5.2</a> last week.</p><p>To make sense of this moment, I spoke with someone who&#8217;s worked inside both of the companies that now sit at the center of AI&#8217;s biggest rivalry.</p><p><a href="https://logank.ai/">Logan Kilpatrick</a> is a Product Lead for Google&#8217;s <a href="https://aistudio.google.com/welcome">AI Studio</a> and the <a href="https://ai.google.dev/gemini-api/docs">Gemini API</a>, helping translate Google DeepMind&#8217;s AI research into tools used by millions of developers worldwide. Before Google, he ran Developer Relations at OpenAI. 
He is also an active angel investor in AI-native companies, including <a href="https://cursor.sh/">Cursor</a> and <a href="https://www.cognition.ai/">Cognition</a>.</p><p>In our conversation, we dig into:</p><ul><li><p>Where Google sees its edge in the model race.</p></li><li><p>How code generation is unlocking product extensibility &#8211; and why every startup now needs to be &#8220;code-adjacent.&#8221;</p></li><li><p>Where token economics actually favors startups over big tech.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!cvIm!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc65b4caa-bbdd-456c-ada4-490c6ff7d589_891x469.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!cvIm!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc65b4caa-bbdd-456c-ada4-490c6ff7d589_891x469.png 424w, https://substackcdn.com/image/fetch/$s_!cvIm!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc65b4caa-bbdd-456c-ada4-490c6ff7d589_891x469.png 848w, https://substackcdn.com/image/fetch/$s_!cvIm!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc65b4caa-bbdd-456c-ada4-490c6ff7d589_891x469.png 1272w, https://substackcdn.com/image/fetch/$s_!cvIm!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc65b4caa-bbdd-456c-ada4-490c6ff7d589_891x469.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!cvIm!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc65b4caa-bbdd-456c-ada4-490c6ff7d589_891x469.png" width="430" height="226.341189674523" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c65b4caa-bbdd-456c-ada4-490c6ff7d589_891x469.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:469,&quot;width&quot;:891,&quot;resizeWidth&quot;:430,&quot;bytes&quot;:240659,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.thetimes.blog/i/182006097?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc65b4caa-bbdd-456c-ada4-490c6ff7d589_891x469.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!cvIm!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc65b4caa-bbdd-456c-ada4-490c6ff7d589_891x469.png 424w, https://substackcdn.com/image/fetch/$s_!cvIm!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc65b4caa-bbdd-456c-ada4-490c6ff7d589_891x469.png 848w, 
https://substackcdn.com/image/fetch/$s_!cvIm!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc65b4caa-bbdd-456c-ada4-490c6ff7d589_891x469.png 1272w, https://substackcdn.com/image/fetch/$s_!cvIm!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc65b4caa-bbdd-456c-ada4-490c6ff7d589_891x469.png 1456w" sizes="100vw" fetchpriority="high"></picture><div></div></div></a></figure></div></li></ul><div><hr></div><h3><strong>01 | </strong>&#8220;We want to be everywhere.&#8221;</h3><p><strong>EO: The latest Gemini <a href="https://storage.googleapis.com/deepmind-media/Model-Cards/Gemini-3-Pro-Model-Card.pdf">release</a> highlights two present advantages for Google &#8211; (i) model </strong><em><strong>performance</strong></em><strong>, especially in complex reasoning and multimodal understanding, and (ii) </strong><em><strong>distribution</strong></em><strong> &#8212; which is arguably more important.</strong></p><p><strong>You all have done a phenomenal job integrating Gemini across existing products like Search, <a href="https://workspace.google.com/solutions/ai/">Workspace</a>, and Cloud, plus new ones like your developer studio and API, and the new IDE and agent builder <a href="https://antigravity.google/">Antigravity</a>. It&#8217;s an elegant experience working across them.</strong></p><p><strong>Looking at the next 2-3 years, how does Google want to position itself against competitors like OpenAI and Anthropic, especially within developer networks?</strong></p><p><strong>Will this remain a direct, head-to-head competition across almost everything? Or do you see some natural specialization over time?</strong></p><p>LK: I think there&#8217;s definitely things we&#8217;ll do that are unique, at least from a model perspective. Multimodal is a great example of this. It&#8217;s been a focus since the first Gemini release, and it&#8217;s where we really have state-of-the-art capability.</p><p>For example, with <a href="https://blog.google/products/gemini/gemini-3-flash/">Gemini 3 Flash</a> we now have something called visual thinking. We&#8217;ll talk more about it in January. But it lets you use code execution with multimodal understanding to preprocess images &#8211; before analyzing them.</p><p>So if you upload an image with poor contrast, and the model is struggling to interpret it, the model can automatically write Python code to adjust the hues or lighting, then re-interpret that corrected version.</p><p>That&#8217;s one example. But there is an interesting set of things that stem from multimodal, which have been a core advantage for Gemini. That&#8217;s all now amplified.</p><p>We&#8217;re also getting really good at code and agentic tool use. And many other things.</p><p><strong>But what about competition with the other model labs?</strong></p><p>These foundation models are inherently general. Because of that, you will continue to see head-to-head competition.</p><p>Companies are maybe carving out some niches as they go. The Anthropic models have historically been good at code. The OpenAI models have been good at chat. The interesting thing for Google is that we have such a wide set of products. The same model we build for Search is the same model we build for Google Cloud, which is the same model powering parts of the Waymo experience.</p><p>A wide range of products means a wide range of customers. 
And my hope is that<em> </em>we lead in<em> generalization</em>.</p><p>You&#8217;re seeing that today, and I think you&#8217;ll continue to see that in the future &#8211; especially as we sim-ship these models across more and more products with each new release. That strategy of tight product integration works well for us.</p><p>I was talking to <a href="https://blog.google/authors/koray-kavukcuoglu/">Koray</a> [DeepMind&#8217;s CTO] earlier today about this&#8230; we&#8217;re now seeing <em>much</em> deeper collaboration between our products and models. Historically, DeepMind was mostly research. But we&#8217;ve turned a corner. There&#8217;s now a lot more collaboration between DeepMind and products like search, the Gemini app, etc.</p><p><strong>That was the most impressive part of the launch, for me. The intelligence capabilities felt very native. They enhanced the products I use every day, without changing the overall experience. I&#8217;m excited to see how that evolves.</strong></p><p><strong>But if I&#8217;m hearing you right, it sounds like it will be direct competition across the board. You don&#8217;t see a narrative where, say, Anthropic starts to own the enterprise segment, and Gemini owns consumer, because of the full-stack integration with Search and G Suite. You don&#8217;t see any of that?</strong></p><p>Not for us. We want to be everywhere, and show up for everyone. That&#8217;s the challenge for Google, given our breadth and the number of use cases we touch.</p><p>Just take the developer use cases. AI Studio and the Gemini API have grown 20-30x over the last year. It&#8217;s becoming a substantial business.</p><p>The same is true with Cloud. Google Cloud is the sixth largest enterprise business in the world. We&#8217;re not ceding enterprise to Anthropic.</p><p>What I&#8217;d love to see &#8211; and <a href="https://gemini.google/overview/image-generation/">Nano Banana</a> [Google&#8217;s AI image generator and photo editor] is a great example of this&#8230; you can start to see how that underlying edge in multimodal understanding translates into new, state-of-the-art capabilities like image generation and editing.</p><p>There&#8217;s this cool interplay as you let an advantage like that play out, and see how it manifests into new capabilities and new products.</p><div><hr></div><h3><strong>02 | </strong>&#8220;Every startup is now code-adjacent.&#8221;</h3><p><strong>I want to test a hypothesis with you. 
That we&#8217;re shifting from a phase of </strong><em><strong>experimentation</strong></em><strong> to one of </strong><em><strong>inference&#8230; </strong></em><strong>from &#8220;</strong><em><strong>can</strong></em><strong> AI do this?&#8221; to &#8220;</strong><em><strong>how best </strong></em><strong>to architect these AI systems?&#8221;</strong></p><p><strong>For example:</strong></p><ul><li><p><strong>Developers are moving from using AI for </strong><em><strong>supplemental tasks</strong></em><strong> (query-response code generation, single API calls) to </strong><em><strong>systems design</strong></em><strong> (building agents with more persistent context and multi-step planning).</strong></p></li><li><p><strong>Backends are now optimized for AI agents as users, not just serving human-facing applications.</strong></p></li><li><p><strong>Autonomous DevOps is creeping into the conversation &#8211; engineers want to delegate more configuration and deployment decisions to AI (though still early here).</strong></p></li></ul><p><strong>Does that match what you&#8217;re seeing? How would you characterize this moment? Anything in your data on developer trends that is especially counterintuitive or surprising?</strong></p><p>One top-level trend &#8211; we are in what I call the &#8220;LLM 2.0&#8221; era. If you look at how early-stage startups were building products a year and a half ago &#8211; versus the last six months &#8211; it looks fundamentally different.</p><p>Historically, to get the models to be useful, you had to do all this scaffolding work to ensure that the model had the right guardrails and configuration.</p><p>But now lots of companies &#8211; especially startups &#8211; are ripping out the things they built a year ago and starting from scratch. The model capability is now so good that what you need, what success looks like, and where you&#8217;re eking out the performance gain over the base model &#8211; that&#8217;s all fundamentally different than before.</p><p>So at the meta-level, that&#8217;s the largest transformation I&#8217;m seeing.</p><p><strong>But what specifically is different, what&#8217;s driving companies to rip-and-replace?</strong></p><p>Our customer base in AI Studio and the API is predominantly startups. With those companies, everything is now agent-first.</p><p>The other big shift is code. Startups outside of developer tools are now making code generation a core capability, a key value driver inside their product. There&#8217;s this interesting trend where almost every startup needs to be &#8220;code-adjacent&#8221; &#8211; because the capability of writing code is so foundational, so applicable across every use case.</p><p><strong>When you say &#8220;code-adjacent,&#8221; what do you mean?</strong></p><p>I&#8217;ll use a random example. A year ago, a product for financial planners wouldn&#8217;t involve any code generation. Today, if you&#8217;re building that product from scratch, there&#8217;s code being generated behind the scenes &#8211; agents writing custom scripts for each planner, each workflow.</p><p>And consumer is next. You wouldn&#8217;t expect consumers to want code. But actually, cutting-edge consumer products are now generating software on demand, based on what the user wants.</p><p>You&#8217;re already seeing this behavior in the Gemini app, with generative UI. Code is becoming the underlying mechanism for delivering personalized information and experiences across every product category. 
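</p><p>To make &#8220;code-adjacent&#8221; concrete, here is a toy sketch of the pattern in Python: the product asks a model to write a small script for one user&#8217;s specific request, then runs it. The <code>generate_code</code> call is a hypothetical stand-in for any code-generation model, and a real product would sandbox and review whatever it executes.</p><pre><code># Toy sketch of the "code-adjacent" pattern: generate a per-user script, then run it.
# generate_code is a hypothetical stand-in for a code-generation model call.

def generate_code(request):
    # Placeholder: a real system would call an LLM here. This canned response
    # stands in for model-written code answering a financial-planning request.
    return (
        "def monthly_savings(goal, years, annual_rate):\n"
        "    months = years * 12\n"
        "    r = annual_rate / 12\n"
        "    return goal * r / ((1 + r) ** months - 1)\n"
    )

def run_generated_tool(request, **inputs):
    source = generate_code(request)
    namespace = {}
    exec(source, namespace)               # toy only; never exec untrusted code unsandboxed
    tool = namespace["monthly_savings"]   # the name comes from the canned snippet above
    return tool(**inputs)

amount = run_generated_tool(
    "monthly savings needed to reach a goal",
    goal=50_000, years=5, annual_rate=0.04,
)
print(round(amount, 2))
</code></pre><p>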
That wasn&#8217;t true 12 months ago.</p><p><strong>So you&#8217;re saying the extensibility of products is orders of magnitude better now, and code generation is what&#8217;s enabling that?</strong></p><p>Exactly.</p><div><hr></div><h3><strong>03 | </strong>&#8220;We&#8217;re going after foundational capabilities.&#8221;</h3><p><strong>Two years ago, people dismissed LLM &#8220;wrappers&#8221; as too brittle to create real, lasting value.</strong></p><p><strong>Today, the consensus is that AI applications actually </strong><em><strong>need</strong></em><strong> proprietary architecture on top of foundation models to work &#8211; <a href="https://arxiv.org/html/2510.09244v1#abstract">subsystems</a> for reasoning, perception, memory, and execution.</strong></p><p><strong>Across these different subsystems, what are the biggest tooling or middleware gaps? Where are developers hitting walls that suggest the infrastructure just isn&#8217;t ready?</strong></p><p><strong>For example, I&#8217;m still hearing about friction in observability and traceability, weak context management, and limited support for cost control.</strong></p><p><strong>Of these, are there any big opportunities for third-party or open-source tooling, things that companies like Google won&#8217;t or can&#8217;t build themselves?</strong></p><p>The observability piece is interesting. There&#8217;s been a lot of investment and companies taking a swing at solving that problem. Maybe none of those products have solved the problem yet, but there are a lot of people trying.</p><p><strong>Well the hard thing is these models are non-deterministic. There&#8217;s no easy way to trace why they made a particular choice. It&#8217;s inherently hard to see what&#8217;s going on under the hood.</strong></p><p>That&#8217;s true. And for us, the question of where we will and won&#8217;t go&#8230; we&#8217;re trying to raise the floor for everyone. We&#8217;re going after foundational capabilities.</p><p>RAG is a good example. In 2024, every developer was spending tons of time on RAG. It was top of mind. So about six weeks ago, we launched a tool that lets developers upload files and query them instantly, no need to build a RAG pipeline yourself.</p><p>That works for 90% of developers. Of course, there will always be advanced scenarios where you need to customize and turn every knob. But for most use cases now, you don&#8217;t.</p><p>That&#8217;s emblematic of the categories we&#8217;ll go after &#8211; <em>foundational components, where Google has a unique advantage from a scale perspective, and we can raise the floor for everyone.</em></p><p>You also mentioned context management&#8230; I think there&#8217;s something really interesting there. Deep Research is a good example of how we&#8217;re starting to address that.</p><p>Part of why I love that product (and we have it in the API now for developers who want to build on top of it) is it handles a lot of context engineering for you.</p><p>End users don&#8217;t want to think about how to get the right information from Point A to Point B, so the model can answer their question. They just want to ask a vaguely-formed question and have the model go find the right context to answer it intelligently.</p><p>That&#8217;s what Deep Research does. It searches the web, but it also connects to Drive and other services. The user doesn&#8217;t have to think about framing the question perfectly or making sure the model has enough context. 
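</p><p>Mechanically, that loop is simple to sketch. Here is a toy version in Python; the planner and search helpers are illustrative stand-ins, not the actual Gemini or Deep Research APIs.</p><pre><code># Toy sketch of agentic context-gathering: the agent decides what to look up,
# fetches it, and repeats until it judges it has enough context to answer.
# plan_next_step, search_web, and search_drive are illustrative stand-ins.

def search_web(query):
    return f"web results for {query!r}"            # stand-in for a web search tool

def search_drive(query):
    return f"Drive documents matching {query!r}"   # stand-in for a Drive search tool

def plan_next_step(question, gathered):
    # Stand-in "model": ask for web context, then Drive context, then answer.
    if len(gathered) == 0:
        return {"tool": "web", "query": question}
    if len(gathered) == 1:
        return {"tool": "drive", "query": question}
    return {"answer": f"Answer to {question!r}, grounded in {len(gathered)} retrieved sources."}

TOOLS = {"web": search_web, "drive": search_drive}

def answer(question, max_steps=5):
    gathered = []                                  # context pulled in so far
    for _ in range(max_steps):
        step = plan_next_step(question, gathered)
        if "answer" in step:                       # the agent decided it has enough
            return step["answer"]
        gathered.append(TOOLS[step["tool"]](step["query"]))
    return "Ran out of steps before answering."

print(answer("How did Q3 sales compare to plan?"))
</code></pre><p>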
The agent figures that out.</p><p>That&#8217;s what I&#8217;m most excited about, and I think there are a lot of unique opportunities to make that work better across more products.</p><div><hr></div><h3><strong>04 | </strong>&#8220;There&#8217;s a real advantage to being small.&#8221;</h3><p><strong>You&#8217;re an active angel investor in AI-native startups. What&#8217;s your theory for where value will accrue for new companies (versus incumbents) as the AI stack matures?</strong></p><p><strong>Maybe said differently, if you had to focus capital on one or two startup categories you think will outperform the broader early-stage market, what would they be?</strong></p><p><strong>I&#8217;m most curious about the underlying rationale. What do you think gives a startup a product edge in those domains over big tech?</strong></p><p>I&#8217;ll note that I invest outside my capacity as a Google employee.</p><p>But the underlying truth of startups &#8211; and this isn&#8217;t a unique perspective &#8211; is that value accrues at the frontier. Larger companies are complex systems, focused on many different things. The advantage startups have in the AI era is the ability to move as fast as humanly possible and go after use cases that don&#8217;t quite work yet.</p><p>I was on a show earlier today, and the guest before me was the CEO of <a href="https://wabi.ai/">Wabi</a>, a personal software creation platform. Six months ago, you couldn&#8217;t build that product. The models weren&#8217;t good enough at generating code. Now they are. This company was taking shots on goal, but now, all of a sudden, their product works and they can bring it to the world.</p><p><strong>But is there a category that stands out for this moment? Over the next three years, for example, are you most excited about vertical AI, horizontal workflow automation, creative tooling, etc. What do you think is the frontier </strong><em><strong>right now</strong></em><strong>, given model capability?</strong></p><p>I like the Wabi example because the new ways of generating code to solve bespoke problems is fascinating.</p><p>There&#8217;s another point I want to make, which is that in many AI-native companies, where there&#8217;s deep product customization, the economics actually favor startups. Can Google reasonably afford to put personal software into every product, for every customer, all at once? Maybe technically. But at our scale, it&#8217;s hard to deploy something that token-intensive across hundreds of millions of users.</p><p>Startups get to start small. You have 10,000 users, you build something great for them, you build momentum, and keep raising to fuel that growth. For expensive use cases like personal software &#8211; where you&#8217;re generating a lot of code per user &#8211; there&#8217;s a real advantage to being small. You get to take shots on goal without needing massive infrastructure from day one.</p><p><strong>I hadn&#8217;t thought about the cost structure advantage that way, that&#8217;s interesting.</strong></p><p><strong>I&#8217;ve been most excited about the moment when non-technical users can build complex software.</strong></p><p><strong>There have always been two separate categories &#8211; users who know what they want, and then the people who can build it for them. Once those categories dissolve, the innovation potential feels very big.</strong></p><p>I think it&#8217;s happening right now, which is exciting.</p><p><strong>We&#8217;re on our way! On that note, I&#8217;ll let you hop. 
Thanks for your perspective.</strong></p><p>My pleasure.</p>]]></content:encoded></item><item><title><![CDATA[Agents Are Learning to Talk]]></title><description><![CDATA[This issue unpacks the emerging standards powering agent networks, and how the dynamics that created companies like Google and Databricks are forming again.]]></description><link>https://www.thetimes.blog/p/agents-are-learning-to-talk</link><guid isPermaLink="false">https://www.thetimes.blog/p/agents-are-learning-to-talk</guid><dc:creator><![CDATA[Evan O'Donnell]]></dc:creator><pubDate>Wed, 12 Nov 2025 16:28:06 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!gn-b!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a7beaf5-fee1-4788-8a12-5126a694aeac_4434x2357.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This issue unpacks the emerging standards powering agent networks, and how the dynamics that created companies like Google and Databricks are forming again.</p><div><hr></div><h3>00 |  A dynamic tracker</h3><p>My <a href="https://www.thetimes.blog/p/can-the-ai-boom-pay-for-itself">last post</a> analyzed whether investment in AI is exceeding demand. A couple readers requested a more regular pulse on how this is trending. I created this <a href="https://literate-basil-8e1.notion.site/AI-Inference-Tracker-114c6090e4438031bc8bce16a3cc654e">tracker</a>, and will update periodically (around earnings calls) as new information becomes available.</p><div><hr></div><h3>01 |  Open standards unlock trillion dollar markets</h3><p><strong>Even if AI goes through a correction in the near-term, demand will eventually catch up. </strong>After the dot-com crash, it took a <a href="https://www.fabricatedknowledge.com/p/lessons-from-history-the-rise-and">decade</a> for bandwidth utilization to match installed capacity. AI could follow a similar path, with intelligence steadily diffusing into software workflows over the next 5-10 years.</p><p>But to realize that promise &#8211; to turn raw model capability into real usage &#8211; AI agents need a more reliable way to coordinate.</p><p>In software, that coordination comes through <em>open standards</em> &#8212; shared, public specs that define how systems exchange data.</p><p><strong>These standards are the precursor to network effects.</strong> They establish<strong> </strong>a shared language that binds applications and workloads into one, cohesive system.</p><p><strong>Open standards unlock trillion-dollar markets, but capture none of the value. The biggest wins in venture capital capitalize on this paradox.</strong></p><p>Early products from companies like Google, Stripe, Snowflake, and Databricks achieved dominance by commercializing the essential services that allow foundational protocols to reach users.</p><p>For example:</p><blockquote><p><strong>|| Google and the Web:</strong> In the 1990s, HTTP and HTML created a universal standard for fetching and serving documents across the Internet. These protocols <em>connected</em> web pages, but offered no way to <em>navigate</em> them. As the number of pages multiplied, discovery became the bottleneck.</p><p>Google&#8217;s PageRank algorithm solved this by ranking pages by the quality of their inbound links, rather than just keyword matching. 
This made the web searchable &#8211;<strong> turning HTTP from a document transmission protocol into a navigable information network.</strong></p></blockquote><blockquote><p><strong>|| Databricks and Big Data: </strong>In 2010, UC Berkeley released Apache Spark, an open-source framework for efficient, real-time data processing. Unlike earlier frameworks that ran jobs sequentially (read data &#8594; process &#8594; write to disk &#8594; repeat), Spark planned each job upfront, which let workloads run in memory and in parallel across machines. Within 2 years, Spark was <a href="https://www.usenix.org/system/files/conference/nsdi12/nsdi12-final138.pdf">20x</a> faster than its predecessors &#8211; far more efficient for big data pipelines.</p><p>But Spark was complex to implement. So the creators launched Databricks to handle the operational headaches (cluster management, job scheduling, and failure recovery) that kept Spark from mainstream use &#8211;<strong> turning an academic framework into the backbone of the modern data stack.</strong></p></blockquote><p>The biggest platforms in tech follow a consistent pattern: <strong>identify an emerging open standard &#8594; find the friction points preventing adoption &#8594; build the platform to eliminate them.</strong></p><p>Today, AI is primed for that playbook.</p><div><hr></div><h3><strong>02 | </strong>The landscape of agent protocols</h3><p>There&#8217;s still no unified set of protocols that allows for seamless collaboration between humans, agents from different providers, and third-party tools.</p><p>Over the last few years, a patchwork of AI-native protocols has emerged &#8212; representing the youngest and least developed layer of the ecosystem.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!gn-b!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a7beaf5-fee1-4788-8a12-5126a694aeac_4434x2357.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!gn-b!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a7beaf5-fee1-4788-8a12-5126a694aeac_4434x2357.png 424w, https://substackcdn.com/image/fetch/$s_!gn-b!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a7beaf5-fee1-4788-8a12-5126a694aeac_4434x2357.png 848w, https://substackcdn.com/image/fetch/$s_!gn-b!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a7beaf5-fee1-4788-8a12-5126a694aeac_4434x2357.png 1272w, https://substackcdn.com/image/fetch/$s_!gn-b!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a7beaf5-fee1-4788-8a12-5126a694aeac_4434x2357.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!gn-b!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a7beaf5-fee1-4788-8a12-5126a694aeac_4434x2357.png" width="1456" height="774" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8a7beaf5-fee1-4788-8a12-5126a694aeac_4434x2357.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:774,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:279881,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.thetimes.blog/i/178703143?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a7beaf5-fee1-4788-8a12-5126a694aeac_4434x2357.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!gn-b!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a7beaf5-fee1-4788-8a12-5126a694aeac_4434x2357.png 424w, https://substackcdn.com/image/fetch/$s_!gn-b!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a7beaf5-fee1-4788-8a12-5126a694aeac_4434x2357.png 848w, https://substackcdn.com/image/fetch/$s_!gn-b!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a7beaf5-fee1-4788-8a12-5126a694aeac_4434x2357.png 1272w, https://substackcdn.com/image/fetch/$s_!gn-b!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a7beaf5-fee1-4788-8a12-5126a694aeac_4434x2357.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h6>Source: <a href="https://arxiv.org/pdf/2504.16736">https://arxiv.org/pdf/2504.16736</a></h6><p></p><p>The 2x2 chart below maps the landscape of agent protocols, across two dimensions:</p><ol><li><p>what connect agents to (third-party tools vs. other agents)</p></li><li><p>the domains they operate in (general-purpose vs. 
<p>Each color indicates the entity that maintains the protocol: commercial vendors like Anthropic and Google (blue), open-source foundations (gray), and academic research groups like Oxford and CMU (red).</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!tCas!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4c97e70c-e5f6-4aae-8a32-a54e2acddac8_4333x3008.png" alt=""></figure></div><p>A few insights after spending time in the white papers and talking with developers building with these standards:</p><div><hr></div><blockquote><p><strong>(A) The developer ecosystem is beginning to coalesce around three standards.</strong></p></blockquote><p>AI agents are digital assistants that can work independently to get things done. 
Three new standards are &#8220;rising to the top&#8221; to help these agents connect with apps, access data, and collaborate with one another.</p><ul><li><p><strong>MCP</strong> (Model Context Protocol) connects AI agents to your apps and data</p></li><li><p><strong>A2A</strong> (Agent2Agent) allows agents to coordinate within organizations</p></li><li><p><strong>ANP</strong> (Agent Network Protocol) enables agents to transact across the open internet</p></li></ul><p>Some real-world examples:</p><ul><li><p>MCP would allow your personal AI assistant to query your Excel budget or dispatch emails via Gmail.</p></li><li><p>A2A would allow your company&#8217;s expense agent to flag unusual spending, coordinate with your compliance agent to verify policy, and escalate to your finance agent for approval.</p></li><li><p>ANP would allow a personal shopping agent to purchase clothes online from a Shopify store that you&#8217;ve never interacted with before.</p></li></ul><p>Each of these standards is <em>complementary</em> &#8211; in theory, one agent could use all three (MCP to access your company&#8217;s tools, A2A to coordinate with coworkers&#8217; agents, and ANP to interact with external services across the internet).</p>
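<p>To make the MCP side of this concrete, here is a minimal sketch of a tool server built with the open-source MCP Python SDK. The server name, tool, and budget figures are hypothetical, and the SDK&#8217;s surface may differ slightly between versions &#8211; treat it as an illustration of the shape of the protocol, not a production integration.</p><pre><code># Minimal sketch of an MCP tool server (hypothetical example).
# Assumes the official MCP Python SDK is installed: pip install "mcp[cli]"
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("budget-tools")  # hypothetical server name

@mcp.tool()
def monthly_spend(category: str) -> float:
    """Return the current monthly spend for a budget category."""
    # A real server would query your spreadsheet or database here.
    sample_budget = {"groceries": 412.50, "travel": 1280.00}
    return sample_budget.get(category, 0.0)

if __name__ == "__main__":
    # Serves the tool over stdio so an MCP client (e.g., a personal
    # AI assistant) can discover and call it.
    mcp.run()
</code></pre><p>A2A and ANP would sit a layer above a server like this: the same assistant that calls the tool could, in principle, hand its results off to another agent or transact with an outside service.</p>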
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3f4e4e09-0603-48ec-b0e4-44a0da39606c_4673x5090.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1586,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:874537,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.thetimes.blog/i/178703143?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3f4e4e09-0603-48ec-b0e4-44a0da39606c_4673x5090.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!WbEr!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3f4e4e09-0603-48ec-b0e4-44a0da39606c_4673x5090.png 424w, https://substackcdn.com/image/fetch/$s_!WbEr!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3f4e4e09-0603-48ec-b0e4-44a0da39606c_4673x5090.png 848w, https://substackcdn.com/image/fetch/$s_!WbEr!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3f4e4e09-0603-48ec-b0e4-44a0da39606c_4673x5090.png 1272w, https://substackcdn.com/image/fetch/$s_!WbEr!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3f4e4e09-0603-48ec-b0e4-44a0da39606c_4673x5090.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h6>Sources: MCP <a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> <a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> <a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" 
target="_self">3</a> <a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> <a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a> || A2A <a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a> <a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a> <a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-8" href="#footnote-8" target="_self">8</a> <a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-9" href="#footnote-9" target="_self">9</a> || ANP <a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-10" href="#footnote-10" target="_self">10</a> <a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-11" href="#footnote-11" target="_self">11</a> <a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-12" href="#footnote-12" target="_self">12</a></h6><p></p><p>Today, <strong>MCP</strong> and <strong>A2A</strong> are the most widely used, although neither has been fully embraced by the developer community.</p><p>Both of these standards work best inside companies or between trusted partners because they:</p><ul><li><p>are designed for environments where everyone knows and trusts each other (i.e., security is not inherent to the protocol itself).</p></li><li><p>work best for predictable, repeating tasks.</p></li><li><p>connect through known addresses or approved lists &#8211; not open discovery.</p></li></ul><p>A2A supports more complex workflows than MCP. Instead of simple one-and-done requests (like MCP), A2A lets agents have <em>ongoing conversations</em>. They can pause a task, share progress updates with one another, and hand off work to another agent. However, A2A has limitations. Today, only two agents can coordinate on a task at once, and it requires careful setup to run smoothly.</p><p><strong>ANP</strong> takes a different approach. It&#8217;s built for agents to collaborate securely anywhere on the web, even with parties they don&#8217;t know or trust. However, it is a newer standard &#8211; very early in the adoption curve, complex to implement, and computationally heavier to run than A2A or MCP.</p><div><hr></div><blockquote><p><strong>(B) There is no dominant protocol &#8211; yet.</strong></p></blockquote><p>We are probably 12-18 months away from any of these standards reaching critical mass.</p><p>MCP has seen the fastest adoption of the three, driven by: <em>(a)</em> marquee <a href="https://www.anthropic.com/news/model-context-protocol">early adopters</a> (Square, Apollo, Replit, Sourcegraph), and <em>(b)</em> low network dependence (one client and one server are enough to start delivering some value).</p><p>A2A has some buzz, but faces hurdles. 
Over <a href="https://developers.googleblog.com/en/a2a-a-new-era-of-agent-interoperability/#:~:text=Today%2C%20we%E2%80%99re%20launching%20a%20new%2C,be%20able%20to%20work%20across">50</a> partners were announced at launch (including Box, Salesforce, SAP, and MongoDB), but adoption has slowed because it&#8217;s complicated to set up and lacks good tooling for fixing bugs.</p><p>I could see the ecosystem evolving in one of two directions:</p><ol><li><p><em>Specialization:</em> MCP becomes the go-to for connecting to tools, A2A dominates inside companies, and ANP handles the open web. This would be similar to how the early internet developed &#8211; TCP/IP, HTTP, and DNS each solved different problems and worked in tandem to make the web possible.</p></li><li><p><em>Consolidation:</em>An enhanced version of one of these protocols &#8211; or a new one altogether &#8212; becomes the &#8220;universal handshake&#8221; for agent communication, eliminating the need for multiple standards.</p></li></ol><div><hr></div><blockquote><p><strong>(C) Key building blocks are still missing.</strong></p></blockquote><p>Several components still need to be built for these open standards to scale:</p><p><strong>(i) Native security.</strong> Right now, both MCP and A2A depend on external security &#8211; they assume the environment they&#8217;re running in is safe. ANP encrypts data in transit, but the data becomes vulnerable once it&#8217;s decrypted for use. To safely handle sensitive workloads in domains like healthcare, finance, or law, these protocols need stronger, built-in safeguards.</p><p><strong>(ii) Better multi-agent coordination.</strong> Current protocols mostly support one-to-one interactions between agents. This works fine for smaller networks, but breaks down at scale.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-13" href="#footnote-13" target="_self">13</a> What&#8217;s missing are communication primitives that let agents have better group coordination &#8212; i.e., ways to broadcast updates to relevant subsets, route messages through intermediary agents, and manage network membership natively within the protocol.</p><p><strong>(iii) Shared memory. </strong>Current standards still treat memory as local. MCP and ANP have no memory function in their protocol design (though individual agents can use tools like LlamaIndex or Mem0 to remember things on their own). A2A allows two agents to share state<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-14" href="#footnote-14" target="_self">14</a> data while collaborating on a task, but this isn&#8217;t accessible to other agents or reusable later. This means agents often redo work that others have already done.</p><div><hr></div><h3><strong>03 | </strong>The investable opportunity</h3><p>Breaking down these protocols reveals where new platforms can be built. Two opportunities stand out today, with more likely to surface as these standards evolve:</p><div><hr></div><blockquote><p><strong>(A) Agentic search</strong></p></blockquote><p>ANP imagines a future where AI agents can find and work with each other across the entire internet.</p><p>Here&#8217;s how it&#8217;s designed to work:</p><p>Every service agent publishes an &#8220;Agent Description&#8221; (AD) &#8211; essentially a resume in a standard format (JSON-LD) that lists what the agent can do, what data it has access to, and any limitations. 
In aggregate, all these descriptions form an &#8220;Agent Discovery Network&#8221; (ADN), a distributed directory that lets agents find the right partner for any task.</p><p>Client agents can search this directory in two ways:</p><ul><li><p><em>Active search:</em> directly querying specific domain addresses to find known agents</p></li><li><p><em>Passive search:</em> using specialized &#8220;Search Agents&#8221; that constantly crawl the ADN, finding and indexing these Agent Descriptions</p></li></ul><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!p89p!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff3a96d64-c916-442d-90f2-b86438259b26_2314x1353.png" alt=""></figure></div>
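<p>To give a feel for what one of these &#8220;resumes&#8221; might contain, here is a hypothetical Agent Description sketched in Python. The field names, context URL, endpoint, and helper function are illustrative assumptions based on the white paper&#8217;s description, not the normative ANP schema (which is expressed in JSON-LD).</p><pre><code># Hypothetical Agent Description (AD) for an online store's service agent.
# Field names and values are illustrative, not the official ANP schema.
agent_description = {
    "@context": "https://example.org/anp/context",   # assumed context URL
    "@type": "AgentDescription",
    "name": "acme-store-checkout-agent",
    "owner": "Acme Store Inc.",
    "capabilities": ["search_catalog", "create_order", "track_shipment"],
    "interfaces": [{"protocol": "ANP", "endpoint": "https://agents.acme.example/anp"}],
    "limitations": ["orders above $500 require human approval"],
}

# A client agent (or a crawling "Search Agent") could filter a directory of
# ADs by capability before deciding whom to contact:
def can_handle(ad: dict, needed_capability: str) -> bool:
    return needed_capability in ad.get("capabilities", [])

print(can_handle(agent_description, "create_order"))  # True
</code></pre><p>The active and passive search modes above differ mainly in how such records are gathered &#8211; queried directly from a known domain, or crawled and indexed in bulk.</p>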
<p>In practice, this agent discovery network and its search agents have not been built yet.</p><p><strong>This represents an enormous opportunity.</strong></p><p>If ANP (or a protocol with similar primitives) becomes widely adopted (still a big if!), there&#8217;s an opportunity to build the <em>discovery layer</em> on top, creating a new type of search engine for the agentic web.</p><p>Unlike Google&#8217;s index of static pages, agentic search would rank agents on more real-time data:</p><ul><li><p><em>Identity:</em> Who is this agent, who runs it, and how do I verify it&#8217;s legitimate?</p></li><li><p><em>Capabilities:</em> What can this agent actually do? (chat, analyze data, control systems)</p></li><li><p><em>Connection details:</em> How do I send and receive requests?</p></li><li><p><em>Security requirements:</em> How does it prove who it is, and what permissions do I need to work with it?</p></li><li><p><em>Support contact:</em> Where do I go if something breaks or I need human help?</p></li></ul><p>Because ANP supports blockchain-based identity verification, this search layer could seamlessly connect traditional web services (web 2.0) with blockchain-based services (web 3.0) &#8211; routing requests to whichever option performs best at the lowest cost.</p><p>Instead of a static directory, you&#8217;d have a dynamic, machine-readable index where agents connect based on current needs and conditions. 
This creates new business models &#8211; agents could pay for better placement, or earn reputation through successful collaborations.</p><p>Over time, this evolves into a <em>self-reinforcing marketplace for machine intelligence</em>, where the most effective agents attract more use, improve through feedback, and rise in visibility &#8211;<strong> a classic network flywheel, built into the fabric of the protocol.</strong></p><div><hr></div><blockquote><p><strong>(B) Shared memory</strong></p></blockquote><p>Today, agent memory remains siloed. Individual agents can remember things locally, and A2A enables temporary context sharing between two agents. But there&#8217;s no persistent, collective memory that multiple agents can access and build upon.</p><p>A <em>shared memory system</em> would give agents a common space to store and access reusable knowledge &#8211; summaries of past work, intermediate calculations, and learned patterns.</p><p><strong>Think GitHub for agent knowledge, where every interaction contributes to a growing repository of reusable intelligence agents can draw from.</strong></p><p>This would complement (not replace) an agent&#8217;s local memory. And it would connect naturally with existing standards, using A2A and ANP identity standards to keep everything organized.</p><p>The first version of this product would be a good fit for companies running their own internal agent networks, where security and access controls are already in place. Over time, it could expand into a shared knowledge layer across organizations &#8211; where each agent&#8217;s work makes the entire network smarter, faster, and cheaper to operate.</p><p>The winning product will need to balance:</p><ul><li><p><em>Versioning:</em> Ensuring past results remain reproducible even as knowledge evolves</p></li><li><p><em>Retention:</em> Balancing what to keep versus what to discard as data accumulates</p></li><li><p><em>Access and security:</em> Keeping sensitive information isolated while sharing general knowledge</p></li><li><p><em>Performance:</em> Doing all this without slowing down real-time agent interactions</p></li></ul><p>If done well, this shifts agents from ephemeral, stateless tools to a collective intelligence that improves over time. <strong>It would give the agentic web its first true substrate for memory.</strong></p><div><hr></div><p>Over the last three years, the AI conversation has been dominated by big tech and the model labs. There will no doubt be a ton of value creation (and destruction) as the model and hardware layers find equilibrium.</p><p><strong>But however that shakes out, there&#8217;s another story unfolding further up the stack </strong>&#8211;&nbsp;how do agents discover each other? Share context? Collaborate across organizations?</p><p><strong>In this story, the bottleneck isn&#8217;t intelligence &#8211; it&#8217;s coordination.</strong></p><p>The open standards addressing this are still in their infancy. 
Startup value won&#8217;t come from building bigger models, but from the bridges that connect them.</p><p>We&#8217;ve seen this before: Open standards lay the foundation for new markets, then commercial platforms make those markets real.</p><p>&#8594; HTTP created the web&#8230; Google made it navigable.</p><p>&#8594; Spark democratized big data&#8230; Databricks made it usable.</p><p>Agent protocols are up next.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p><a href="https://modelcontextprotocol.io/docs/">Model Context Protocol</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p><a href="https://github.com/modelcontextprotocol">Model Context Protocol &#183; GitHub</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p><a href="https://arxiv.org/pdf/2503.23278">Model Context Protocol (MCP): Landscape, Security Threats, and Future Research Directions</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p><a href="https://arxiv.org/pdf/2506.13538">Model Context Protocol (MCP) at First Glance: Studying the Security and Maintainability of MCP Servers</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p><a href="https://arxiv.org/pdf/2508.01780">LiveMCPBench: Can Agents Navigate an Ocean of MCP Tools?</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p><a href="https://a2a-protocol.org/">Agent2Agent (A2A) Protocol</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p><a href="https://github.com/a2aproject/A2A">GitHub - a2aproject/A2A</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-8" href="#footnote-anchor-8" class="footnote-number" contenteditable="false" target="_self">8</a><div class="footnote-content"><p><a href="https://developers.googleblog.com/en/a2a-a-new-era-of-agent-interoperability/">Announcing the Agent2Agent Protocol (A2A) - Google Developers Blog</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-9" href="#footnote-anchor-9" class="footnote-number" contenteditable="false" target="_self">9</a><div class="footnote-content"><p><a href="https://kodekloud.com/blog/a2a-protocol/">Agent2Agent (A2A) Protocol Explained for Everyone</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-10" href="#footnote-anchor-10" class="footnote-number" contenteditable="false" target="_self">10</a><div 
class="footnote-content"><p> <a href="https://www.agent-network-protocol.com/">Agent Network Protocol</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-11" href="#footnote-anchor-11" class="footnote-number" contenteditable="false" target="_self">11</a><div class="footnote-content"><p> <a href="https://github.com/agent-network-protocol/AgentNetworkProtocol"> Agent Network Protocol (ANP) &#183; GitHub</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-12" href="#footnote-anchor-12" class="footnote-number" contenteditable="false" target="_self">12</a><div class="footnote-content"><p> <a href="https://arxiv.org/pdf/2508.00007">Agent Network Protocol Technical White Paper</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-13" href="#footnote-anchor-13" class="footnote-number" contenteditable="false" target="_self">13</a><div class="footnote-content"><p>In a network of <em>n</em> agents, pairwise coordination requires <em>n*</em>(<em>n</em>&#8211;1)/2 possible connections. In a worst case scenario with 100 agents, that&#8217;s upwards of 4,950 connections. With 1,000 agents, it jumps to nearly 500,000 connections. This makes the system increasingly slow, expensive, and prone to bottlenecks as more agents join the network.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-14" href="#footnote-anchor-14" class="footnote-number" contenteditable="false" target="_self">14</a><div class="footnote-content"><p>In computing, <em>state</em> refers to all information that describes the current condition of a system or program at a given moment, including variables, data in memory, and progress within a process.</p><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[Can the AI Boom Pay for Itself?]]></title><description><![CDATA[The Math Behind the Mania]]></description><link>https://www.thetimes.blog/p/can-the-ai-boom-pay-for-itself</link><guid isPermaLink="false">https://www.thetimes.blog/p/can-the-ai-boom-pay-for-itself</guid><dc:creator><![CDATA[Evan O'Donnell]]></dc:creator><pubDate>Thu, 30 Oct 2025 16:17:01 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/c7ae65f9-345e-437a-ba69-a5a7bc4d4d51_468x332.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Investment in AI infrastructure is accelerating. This issue analyzes whether demand for the technology will keep pace.</p><div><hr></div><blockquote><p><strong>UPDATE - NOV 2025</strong></p></blockquote><p>I&#8217;m now tracking <strong>AI token consumption</strong> in real time. 
<strong><a href="https://literate-basil-8e1.notion.site/AI-Inference-Tracker-114c6090e4438031bc8bce16a3cc654e">View the live tracker here &#8594;</a></strong></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://literate-basil-8e1.notion.site/AI-Inference-Tracker-114c6090e4438031bc8bce16a3cc654e" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!oliO!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc66e7b3-d441-440f-b4e1-53153a75a75e_1460x1146.png 424w, https://substackcdn.com/image/fetch/$s_!oliO!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc66e7b3-d441-440f-b4e1-53153a75a75e_1460x1146.png 848w, https://substackcdn.com/image/fetch/$s_!oliO!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc66e7b3-d441-440f-b4e1-53153a75a75e_1460x1146.png 1272w, https://substackcdn.com/image/fetch/$s_!oliO!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc66e7b3-d441-440f-b4e1-53153a75a75e_1460x1146.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!oliO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc66e7b3-d441-440f-b4e1-53153a75a75e_1460x1146.png" width="400" height="314.010989010989" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/cc66e7b3-d441-440f-b4e1-53153a75a75e_1460x1146.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1143,&quot;width&quot;:1456,&quot;resizeWidth&quot;:400,&quot;bytes&quot;:179634,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://literate-basil-8e1.notion.site/AI-Inference-Tracker-114c6090e4438031bc8bce16a3cc654e&quot;,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.thetimes.blog/i/177508715?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc66e7b3-d441-440f-b4e1-53153a75a75e_1460x1146.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!oliO!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc66e7b3-d441-440f-b4e1-53153a75a75e_1460x1146.png 424w, https://substackcdn.com/image/fetch/$s_!oliO!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc66e7b3-d441-440f-b4e1-53153a75a75e_1460x1146.png 848w, https://substackcdn.com/image/fetch/$s_!oliO!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc66e7b3-d441-440f-b4e1-53153a75a75e_1460x1146.png 1272w, https://substackcdn.com/image/fetch/$s_!oliO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc66e7b3-d441-440f-b4e1-53153a75a75e_1460x1146.png 1456w" sizes="100vw" 
fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><em>For updates on this tracker and fresh analysis on other emerging technology trends, subscribe below.</em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.thetimes.blog/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.thetimes.blog/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><h3><strong>01 |</strong>  The rhythm of exuberance</h3><p><strong>In 2000, the Internet looked unstoppable.</strong></p><p>Telecom carriers poured over a quarter of revenue into fiber networks, <a href="https://standards.tiaonline.org/gov_affairs/fcc_filings/documents/Nov13-2002_CapEx_QoS_Final.pdf">betting</a> Internet traffic would double every three months. Market concentration spiked, as investors believed control of infrastructure would cement long-term advantage. The Shiller CAPE ratio, a measure of how expensive the stock market is relative to historical earnings, hit an all-time high of 43.8 (more than double the average of the prior 15 years).</p><p><strong>Then the dominos fell.</strong></p><p>Network utilization lagged. Prices collapsed under excess capacity. Telecom carriers, who had <a href="https://www.wsj.com/articles/SB1040606010738807193">booked</a> revenue from leasing unused fiber to one another, were forced to write down these receivables as losses. Defaults rose. Equity values deflated.</p><p>By 2002, only <a href="https://www.wsj.com/articles/SB1032982764442483713">2.7%</a> of installed fiber was in use.</p><p>The modern Internet &#8211; cloud, streaming, big data, mobile &#8211; <em>eventually</em> arrived. But not fast enough to prevent a correction.</p><p><strong>History doesn&#8217;t repeat itself, but maybe it&#8217;s starting to rhyme.</strong> Big Tech is investing <em>big</em> in AI infrastructure, and investors are buying.</p><p>On inflation-adjusted basis, capital expenditures are now roughly <strong>twice</strong> the level at the 2000 peak. 
The six largest tech firms now make up <strong>over 30%</strong> of the S&amp;P 500, double the concentration we saw in 2000. The CAPE ratio is the second highest it has ever been.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!wCfY!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad74e0c0-7eae-4988-9e10-720b9a7d6d2f_1240x744.png" alt=""></figure></div><h6>Footnotes: <a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> <a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> <a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> <a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> <a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a> <a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a> <a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a> <a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-8" href="#footnote-8" target="_self">8</a> <a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-9" href="#footnote-9" target="_self">9</a> <a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-10" href="#footnote-10" target="_self">10</a> <a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-11" href="#footnote-11" target="_self">11</a> <a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-12" href="#footnote-12" target="_self">12</a></h6><p>Of course, comparing 2000 to 2025 isn&#8217;t apples-to-apples.</p><p>In 2000, investment went into networks of fiber and routers, long-life assets financed mostly with debt.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-13" href="#footnote-13" target="_self">13</a> Today&#8217;s infrastructure &#8211; GPUs and servers &#8211; depreciates faster, and is funded by <a href="https://www.tikr.com/blog/8-companies-generating-25-free-cash-flow-margins">cash-rich</a> tech giants (though some <a 
href="https://www.channelfutures.com/data-centers/big-ai-data-center-deals-of-oracle-peers-run-on-debt">leverage</a> is making its way into the system). And unlike in 2000, today&#8217;s market is concentrated around firms with real, 15+ year moats in distribution, data, and developer ecosystems.</p><p><strong>Still, it&#8217;s hard to ignore the parallels:</strong></p><blockquote><p><em>Big Tech is investing ahead of demand, on the premise that:</em></p><ul><li><p>demand for AI will soon rapidly permeate the real economy</p></li><li><p>scaling compute is necessary to maintain a competitive advantage in AI (new <a href="https://arxiv.org/abs/2507.07931">MIT research</a> disputes this, showing steep diminishing returns to additional compute and convergence of model capabilities over time).</p></li></ul></blockquote><blockquote><p><em>Market power is highly concentrated.</em></p></blockquote><blockquote><p><em>Circular financing is back: </em>Chipmakers, cloud providers, and model labs are funding one another through prepaid deals, minimum-spend guarantees, off-balance-sheet projects, and direct debt and equity positions &#8212; arrangements that can overstate real demand.</p></blockquote><p><strong>AI Ecosystem Capital Flows</strong><a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-14" href="#footnote-14" target="_self">14</a></p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!IYfF!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9dca6ac-5444-428e-947c-3538989e44fa_998x530.png"><img src="https://substackcdn.com/image/fetch/$s_!IYfF!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9dca6ac-5444-428e-947c-3538989e44fa_998x530.png" width="998" height="530" alt=""></a></figure></div><p>Fundamentally, both cycles rely on the same underlying premise &#8594; <strong>real demand will surge before the bill comes due.</strong></p><div><hr></div><h3><strong>02 |</strong>  Can demand keep up?</h3><p>This context raises two important questions:</p><ol><li><p><strong>How much &#8211; and how fast &#8211; must AI usage grow over the next several years to justify today&#8217;s
investment?</strong></p></li><li><p><strong>Is current demand roughly on pace?</strong></p></li></ol><p>I built a simple model to answer these questions. (Take the results as ballpark figures, not precise forecasts.)</p><p>For this analysis, I measure AI demand in <em>inference tokens</em>.</p><p>A token is the basic unit a model uses to process data &#8211; think of tokens like kilowatt-hours for electricity. <em>Inference</em> refers to tokens used to generate a response to a new user prompt, as opposed to tokens used for training or model development.</p><p>To estimate how much AI usage must grow to justify current investment, the model links <em>infrastructure spending</em> to <em>inference token demand</em> in four steps:</p><p><strong>1. Direct CAPEX spend.</strong> The projected spend on GPUs, servers, and compute hardware &#8211; assets that <em>directly</em> power AI token consumption. This excludes <em>indirect</em> inputs (e.g., investment in real estate, site development, and energy systems). I run these projections in two scenarios &#8211; <em>(1)</em> an aggressive buildout, based on <a href="https://www.mckinsey.com/industries/technology-media-and-telecommunications/our-insights/the-cost-of-compute-a-7-trillion-dollar-race-to-scale-data-centers">analyst estimates</a>, and <em>(2)</em> a conservative &#8220;boom-and-bust&#8221; case that mirrors the dot-com-era investment cycle.</p><p><strong>2. Requisite revenue.</strong> The annual customer revenue needed to generate a sustainable return on investment for each scenario. This assumes industry norms around overall asset and inference utilization, depreciation, and CAPEX-to-revenue hurdle rates.</p><p><strong>3. Blended token price.</strong> Estimated from OpenAI, Anthropic, and Gemini list pricing, factoring in standard input-to-output token ratios, how prices have scaled over time, and patterns around how users upgrade to the latest model.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-15" href="#footnote-15" target="_self">15</a></p><p><strong>4. Token volume.</strong> Dividing the required revenue by the blended token price yields an estimate of how much AI usage would be needed to justify the build-out.</p><p>Based on this, an aggressive, <strong>$3.1T</strong> buildout over the next 5 years implies AI usage must compound at <strong>2.8x</strong> every year through 2030. A more conservative <strong>$1.3T</strong> path still requires <strong>2.5x</strong> annual growth.</p>
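<p><em>To make the mechanics concrete, here is a minimal sketch of steps 1&#8211;4 in Python, including the conversion from an annual multiple to the equivalent monthly growth rate. Every number in it &#8211; the CAPEX-to-revenue hurdle, the blended token price, the current token run-rate &#8211; is an illustrative placeholder standing in for the footnoted assumptions, and the sketch skips the depreciation schedule, utilization ramp, and annual price declines the full model layers on top.</em></p><pre><code># Minimal sketch of the CAPEX -> token-demand logic described above.
# All inputs are illustrative placeholders, not the exact values behind the charts.

def required_tokens_per_year(direct_capex_usd: float,
                             capex_to_revenue_ratio: float = 4.0,       # assumed ~4x hurdle (see footnotes)
                             price_per_million_tokens_usd: float = 5.0  # assumed blended price
                             ) -> float:
    """Steps 2 and 4: revenue needed to sustain the buildout, expressed as inference tokens."""
    required_revenue = direct_capex_usd / capex_to_revenue_ratio              # step 2: requisite revenue
    return required_revenue / (price_per_million_tokens_usd / 1_000_000)      # step 4: token volume


def implied_growth(target_tokens: float, current_tokens: float, years: float = 5.0):
    """Annual growth multiple (and equivalent monthly rate) needed to reach the target volume."""
    annual_multiple = (target_tokens / current_tokens) ** (1 / years)
    monthly_rate = annual_multiple ** (1 / 12) - 1
    return annual_multiple, monthly_rate


# Step 1: plug in a direct-CAPEX scenario, e.g. the aggressive ~$3.1T case.
target = required_tokens_per_year(3.1e12)
annual, monthly = implied_growth(target, current_tokens=1.0e15)  # ~1 quadrillion tokens/yr is a placeholder
print(f"Implied growth: {annual:.1f}x per year (~{monthly:.0%} per month)")
</code></pre><p><em>Run with these placeholders, the sketch lands in the same neighborhood as the scenarios above (roughly 2.7x per year, or about 9% per month); swapping in the conservative CAPEX figure or a faster price decline moves the multiple around, which is why the results are best read as ranges.</em></p>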
<div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!-yg9!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F027dbe6d-1a52-499e-99b6-a2b0925ce142_1830x621.png"><img src="https://substackcdn.com/image/fetch/$s_!-yg9!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F027dbe6d-1a52-499e-99b6-a2b0925ce142_1830x621.png" width="1456" height="494" alt=""></a></figure></div><h6>Footnotes: <a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-16" href="#footnote-16" target="_self">16</a> <a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-17" href="#footnote-17" target="_self">17</a> <a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-18" href="#footnote-18" target="_self">18</a> <a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-19" href="#footnote-19" target="_self">19</a> <a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-20" href="#footnote-20" target="_self">20</a> <a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-21" href="#footnote-21" target="_self">21</a> <a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-22" href="#footnote-22" target="_self">22</a> <a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-23" href="#footnote-23" target="_self">23</a></h6><p></p><p>To put this in context, both scenarios would require <strong>a steeper adoption curve than any prior tech cycle</strong>, including the early internet, mobile, and cloud.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!MAxB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5353250-3211-48f8-9b3b-9ccc46177498_1199x1031.png"><img src="https://substackcdn.com/image/fetch/$s_!MAxB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5353250-3211-48f8-9b3b-9ccc46177498_1199x1031.png" width="602" height="518" alt=""></a></figure></div><h6>Footnotes: <a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-24" href="#footnote-24" target="_self">24</a> <a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-25" href="#footnote-25" target="_self">25</a> <a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-26" href="#footnote-26" target="_self">26</a> <a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-27" href="#footnote-27" target="_self">27</a></h6><p></p><p>Yes, this would be unprecedented growth. But it is not impossible to imagine when you consider the maturity of our digital landscape.</p><p>Today, AI can be shipped through existing products &#8211; SaaS, app stores, browsers &#8211; that billions of people already use. It can plug into workflows with a simple API call, running on cloud infrastructure that abstracts away hosting and scale.</p><p>With distribution this wide and deployment friction this low, adoption certainly could happen faster than we&#8217;ve seen before.</p><div><hr></div><h3><strong>03 | </strong>A question of timing</h3><p>So far, AI consumption is tracking well against these curves.</p><p>Both modeled scenarios require a <strong>~9-12%</strong> compound monthly growth rate in inference token consumption over the next couple years. Microsoft is currently tracking to <strong>14%</strong>,<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-28" href="#footnote-28" target="_self">28</a> and Alphabet saw <strong>15%</strong> per month from August to September (although that is down from 31% CMGR in the prior ten months).<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-29" href="#footnote-29" target="_self">29</a></p><p>It is still too early to know for certain whether we are on the right trajectory. I&#8217;m monitoring three developments that could indicate an impending correction:</p><p><strong>(1) Token growth.</strong> On an absolute basis, inference token demand is still within target. But over the past couple months, there&#8217;s been a deceleration. Watch the next few earnings calls. If monthly token processing falls consistently below 9-10% growth, or if companies stop disclosing the metric altogether, it may suggest the market is over-committed.</p><p><strong>(2) ROI in the real economy.</strong> AI still needs to prove wide, durable value beyond chatbots and coding assistants &#8212; and fast. <a href="https://fortune.com/2025/08/18/mit-report-95-percent-generative-ai-pilots-at-companies-failing-cfo/">Many</a> enterprise deployments aren&#8217;t yet showing measurable ROI. On the consumer side, usage is heavily subsidized (only <a href="https://www.theregister.com/2025/10/15/openais_chatgpt_popular_few_pay">~5%</a> of ChatGPT&#8217;s users pay). These dynamics aren&#8217;t unusual this early in a tech cycle.
But risk will build as more capital piles into the hardware layer &#8211; <em>especially</em> into assets whose economic life may be <a href="https://cernocapital.com/accounting-for-ai-financial-accounting-issues-and-capital-deployment-in-the-hyperscaler-landscape">shorter</a> than what is being reported to shareholders.</p><p>If model labs keep emphasizing new releases, benchmark scores, and pilot announcements while remaining quiet about large-scale production deployments and customer renewals, it could signal that adoption in the real economy is lagging behind the hype.</p><p><strong>(3) A break in scaling patterns.</strong> This analysis assumes model performance scales in tandem with compute and data. A step-change in AI efficiency could break this pattern &#8211; e.g., if an entirely new model architecture emerges that requires far less compute to achieve the same performance, or if smaller/open-source models prove more effective than large-scale proprietary ones for specific use cases.</p><p>Today, <a href="https://menlovc.com/perspective/2025-mid-year-llm-market-update/">13%</a> of AI workloads run on open-source models. Whether that share rises or falls will be telling.</p><p>I&#8217;m also tracking platforms like <a href="https://fireworks.ai/">Fireworks</a> and <a href="https://deepinfra.com/">DeepInfra</a>, which simplify deployment of smaller and open-source models. If those platforms grow faster than the overall inference market and capture developer mindshare, it would suggest that near-term demand can be met with lower-cost solutions &#8212; thus reducing the need for major new infrastructure spending.</p><div><hr></div><p><strong>Netting it all out, I think we are likely in an over-investment phase.</strong></p><p>Inference token consumption is growing well, but there are signs it&#8217;s starting to slow. Most AI-native products are good for individual productivity in coding or writing, but fall short on full workflow automation or enabling real collaboration. If that persists over the next 12-18 months, it could cap near-term demand.</p><p>This view tracks with researchers like ex-OpenAI&#8217;s Andrej Karpathy, who argues reliable, agentic systems &#8220;<a href="https://www.businessinsider.com/andrej-karpathy-ai-agents-timelines-openai-2025-10">just don&#8217;t work</a>&#8221; yet, and will arrive through a steady, multi-year build &#8212; not overnight.</p><p>If there is some over-investment today, that might be good news for startups. <strong>Historically, when infrastructure supply runs ahead of demand, the next wave of value usually comes from networks and coordination that can be built on top.</strong></p><p>After the 2000 fiber glut, TCP/IP and HTTP standardized communication over the internet, turning surplus bandwidth into the interconnected web. In the big-data era, open frameworks like Apache Hadoop enabled more efficient coordination between data storage and compute workloads, unlocking idle server capacity.</p><p><strong>AI could be on the cusp of a new coordination layer.</strong></p><p>Right now, there&#8217;s a quiet wave of new interoperability standards coming to market. These frameworks have the potential to translate raw model capability into cohesive, multi-agent systems &#8211; creating new ways for agents to communicate, share context, and compose workflows together. 
Startups that can commercialize and scale these protocols have a chance to build the control layer for networked intelligence, turning individual agents into a functioning economy.</p><p>Next <a href="https://www.thetimes.blog/p/agents-are-learning-to-talk">post</a>, I take a closer look at emerging multi-agent standards &#8211; what they are, how they work, and the market and network dynamics they could unlock.</p><div><hr></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p><a href="https://www.macrotrends.net/stocks/charts/CSCO/cisco/stock-price-history">Cisco - 35 year stock price history</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p><a href="https://finance.yahoo.com/quote/NVDA/">NVDA stock price</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p><a href="https://osbornepartners.com/the-sp500-concentration/">The S&amp;P 500 concentration</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p><a href="https://paulkedrosky.com/weekend-reading-plus-spvs-meta-and-fiber-buildout-2-0/">SPVs, credit, and AI data centers: how a new credit bubble is building in AI data centers</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p><a href="https://www.businessinsider.com/big-tech-ai-capex-infrastructure-data-center-wars-2025-10">Why the biggest risk in AI might not be the technology, but the trillion-dollar race to build it</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p><a href="https://www.bls.gov/data/inflation_calculator.htm">Bureau of Labor Statistics: CPI Inflation Calculator</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p><a href="https://standards.tiaonline.org/gov_affairs/fcc_filings/documents/Nov13-2002_CapEx_QoS_Final.pdf">Investment, capital spending, and service quality in U.S. 
telecommunications networks</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-8" href="#footnote-anchor-8" class="footnote-number" contenteditable="false" target="_self">8</a><div class="footnote-content"><p><a href="https://www.businessinsider.com/big-tech-ai-capex-infrastructure-data-center-wars-2025-10">Why the biggest risk in AI might not be the technology, but the trillion-dollar race to build it</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-9" href="#footnote-anchor-9" class="footnote-number" contenteditable="false" target="_self">9</a><div class="footnote-content"><p><a href="https://standards.tiaonline.org/gov_affairs/fcc_filings/documents/Nov13-2002_CapEx_QoS_Final.pdf">Investment, capital spending, and service quality in U.S. telecommunications networks</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-10" href="#footnote-anchor-10" class="footnote-number" contenteditable="false" target="_self">10</a><div class="footnote-content"><p><a href="https://www.datacenterdynamics.com/en/news/amazon-2025-capex-to-reach-100bn-aws-revenue-hit-100bn-in-2024/">Amazon 2025 capex to reach $100bn, AWS 2024 revenue hit $100bn</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-11" href="#footnote-anchor-11" class="footnote-number" contenteditable="false" target="_self">11</a><div class="footnote-content"><p><a href="https://investor.atmeta.com/investor-news/press-release-details/2025/Meta-Reports-Fourth-Quarter-and-Full-Year-2024-Results/">Meta reports fourth quarter and full year 2024 results</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-12" href="#footnote-anchor-12" class="footnote-number" contenteditable="false" target="_self">12</a><div class="footnote-content"><p>Shiller PE ratio for the S&amp;P 500, based on average inflation-adjusted earnings from the previous 10 years. 
<a href="https://www.multpl.com/shiller-pe/table/by-year">Shiller PE ratio by year.</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-13" href="#footnote-anchor-13" class="footnote-number" contenteditable="false" target="_self">13</a><div class="footnote-content"><p>Telecom carrier debt peaked at <a href="https://www.latimes.com/archives/la-xpm-2002-jun-30-fi-billions30-story.html">$300 billion</a> in 2000.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-14" href="#footnote-anchor-14" class="footnote-number" contenteditable="false" target="_self">14</a><div class="footnote-content"><p><a href="https://sherwood.news/markets/analyst-a-lot-more-disclosure-needed-on-these-circular-ai-deals/">A lot more disclosure needed on these &#8220;circular&#8221; AI deals.</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-15" href="#footnote-anchor-15" class="footnote-number" contenteditable="false" target="_self">15</a><div class="footnote-content"><p>Even though consumer subscriptions (e.g., ChatGPT Plus) aren&#8217;t billed per token, for simplicity, this analysis assumes that subscription and API pricing effectively average out to similar unit economics, since both rely on the same GPU capacity and cost structure.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-16" href="#footnote-anchor-16" class="footnote-number" contenteditable="false" target="_self">16</a><div class="footnote-content"><p>The aggressive, $3.1 trillion direct CAPEX case is anchored in McKinsey&#8217;s estimate that &#8220;technology developers and designers&#8221; (chips and compute hardware) will require roughly <a href="https://www.mckinsey.com/industries/technology-media-and-telecommunications/our-insights/the-cost-of-compute-a-7-trillion-dollar-race-to-scale-data-centers">$3.1 trillion</a> through 2030. This aligns directionally with NVIDIA CEO Jensen Huang&#8217;s recent claim that hyperscaler AI-infrastructure spending is already trending toward <a href="https://thecuberesearch.com/jensen-claims-600b-in-annual-capex-spend-wait-what">$600 </a>billion annually.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-17" href="#footnote-anchor-17" class="footnote-number" contenteditable="false" target="_self">17</a><div class="footnote-content"><p>Bear case follows a dot-com-era CAPEX <a href="https://paulkedrosky.com/weekend-reading-plus-spvs-meta-and-fiber-buildout-2-0/">growth curve</a>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-18" href="#footnote-anchor-18" class="footnote-number" contenteditable="false" target="_self">18</a><div class="footnote-content"><p>Historical 2023-2024 AI CAPEX reflects the average of <a href="https://www.businessinsider.com/big-tech-ai-capex-infrastructure-data-center-wars-2025-10">public</a> <a href="https://paulkedrosky.com/weekend-reading-plus-spvs-meta-and-fiber-buildout-2-0/">estimates</a>. Assumes <a href="https://www.mckinsey.com/industries/technology-media-and-telecommunications/our-insights/the-cost-of-compute-a-7-trillion-dollar-race-to-scale-data-centers">60%</a> of total CAPEX goes to <em>direct</em> inference inputs (compute hardware/GPUs, servers, and networking infra). 
This analysis excludes <em>indirect</em> inputs (real estate, site development, and energy systems).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-19" href="#footnote-anchor-19" class="footnote-number" contenteditable="false" target="_self">19</a><div class="footnote-content"><p>Sustainable CAPEX-to-revenue ratio of <a href="https://www.bain.com/insights/how-can-we-meet-ais-insatiable-demand-for-compute-power-technology-report-2025/">4x</a> and straight-line depreciation over 6 years. Note that several hyper-scalers recently extended estimated useful lives for servers/network gear to 5-6 years &#8211; <a href="https://abc.xyz/investor/faqs-and-general-information">Alphabet</a> to 6 years; <a href="https://www.theregister.com/2022/08/02/microsoft_server_life_extension">Microsoft</a> to 6 years; <a href="https://www.thestack.technology/meta-extends-server-life-again-saving-it-2-9-billion/">Meta</a> to 5.5 years &#8211; whereas datacenter GPUs themselves often have shorter service lives (possibly <a href="https://x.com/techfund1/status/1849031571421983140">1-3</a> years at high utilization). Using a 6-year schedule introduces risk if hardware must be retired or replaced sooner than the booked period.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-20" href="#footnote-anchor-20" class="footnote-number" contenteditable="false" target="_self">20</a><div class="footnote-content"><p>Assumes 70% average hardware utilization across deployed GPU and data-center infrastructure, consistent with <a href="https://www.tomshardware.com/pc-components/gpus/datacenter-gpu-service-life-can-be-surprisingly-short-only-one-to-three-years-is-expected-according-to-unnamed-google-architect">industry</a> <a href="https://www.aterio.io/blog/how-much-power-would-a-data-center-with-30-000-gpus-consume-in-a-year">estimates</a> of 60&#8211;80%.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-21" href="#footnote-anchor-21" class="footnote-number" contenteditable="false" target="_self">21</a><div class="footnote-content"><p>Blended token price is derived from current and historical list rates across <a href="https://www.cloudzero.com/blog/openai-pricing/">OpenAI</a>, <a href="https://www.cloudzero.com/blog/claude-pricing">Anthropic</a>, and <a href="https://www.cloudzero.com/blog/gemini-pricing">Gemini</a>, assuming:: (i) a <a href="https://artificialanalysis.ai/methodology">3:1</a> input-output token ratio, (ii) <a href="https://menlovc.com/perspective/2025-mid-year-llm-market-update/">66%</a> of tokens are served on frontier models, and (iii) a 50/50 split between pay-as-you-go and batch/committed plans (where available). Even though subscriptions like ChatGPT Plus aren&#8217;t billed per token, they still consume the same GPU capacity. For simplicity, this analysis assumes roughly similar unit economics.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-22" href="#footnote-anchor-22" class="footnote-number" contenteditable="false" target="_self">22</a><div class="footnote-content"><p>Assumes a blended annual token price decrease factor of ~1.8x. 
This factor is built by weighting model cohorts &#8211; frontier models (<a href="https://menlovc.com/perspective/2025-mid-year-llm-market-update/">66%</a> of tokens) with observed annual price step-downs in the ~1.1&#8211;3.3&#215; range, and older models (34%) with much steeper declines (<a href="https://a16z.com/llmflation-llm-inference-cost">~10x</a>).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-23" href="#footnote-anchor-23" class="footnote-number" contenteditable="false" target="_self">23</a><div class="footnote-content"><p>Assumes a three year <a href="https://www.ankursnewsletter.com/p/the-real-price-of-ai-pre-training">inference utilization</a> schedule, 50% in Y1, 80% in Y2, 90% in Y3, reflecting ramp-up patterns as infrastructure utilization stabilizes.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-24" href="#footnote-anchor-24" class="footnote-number" contenteditable="false" target="_self">24</a><div class="footnote-content"><p><a href="https://ourworldindata.org/internet">Internet: our world in data</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-25" href="#footnote-anchor-25" class="footnote-number" contenteditable="false" target="_self">25</a><div class="footnote-content"><p><a href="https://wigle.net/">WiGLE</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-26" href="#footnote-anchor-26" class="footnote-number" contenteditable="false" target="_self">26</a><div class="footnote-content"><p><a href="https://www.mediaculture.fr/wp-content/uploads/2019/09/internet-trends-2018-report-Mary-Meeker.pdf">Internet trends 2018</a>, <a href="http://statista.com/statistics/271539/worldwide-shipments-of-leading-smartphone-vendors-since-2007">Global smartphone shipments from 2007 to 2024</a> </p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-27" href="#footnote-anchor-27" class="footnote-number" contenteditable="false" target="_self">27</a><div class="footnote-content"><p><a href="https://www.bondcap.com/report/pdf/Trends_Artificial_Intelligence.pdf">Trends &#8211;&nbsp;artificial intelligence</a>, <a href="https://www.srgresearch.com/articles/2020-the-year-that-cloud-service-revenues-finally-dwarfed-enterprise-spending-on-data-centers">2020: the year that cloud service revenues finally dwarfed enterprise spending on data centers</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-28" href="#footnote-anchor-28" class="footnote-number" contenteditable="false" target="_self">28</a><div class="footnote-content"><p>According to the latest earnings, Microsoft processed roughly <a href="https://www.fool.com/earnings/call-transcripts/2025/08/05/microsoft-msft-q4-2025-earnings-call-transcript/">500T</a> tokens over FY 2025 (up about 7x year-over-year) and <a href="https://www.microsoft.com/en-us/investor/events/fy-2025/earnings-fy-2025-q3">100T</a> in Q3 FY 2025 alone (up 5x year-over-year).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-29" href="#footnote-anchor-29" class="footnote-number" contenteditable="false" target="_self">29</a><div class="footnote-content"><p>Alphabet has reported processing <a href="https://blog.google/inside-google/message-ceo/alphabet-earnings-q2-2025">480T, 980T</a>, and <a href="https://blog.google/inside-google/message-ceo/alphabet-earnings-q3-2025/#introduction">1,300T</a> 
inference tokens in May, Jul, and Sept 2025, respectively. This increase, however, has largely been <a href="https://winsomemarketing.com/ai-in-marketing/googles-1.3-quadrillion-token-boast">attributed</a> to computational effort rather than user value: Google&#8217;s latest model, Gemini 2.5 Flash, uses approximately 17 times more tokens per request than earlier versions and costs up to 150 times more for reasoning tasks.</p><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[Most AI Startups Are Built to Die]]></title><description><![CDATA[As AI makes software more malleable, value is shifting to the orchestration layer.]]></description><link>https://www.thetimes.blog/p/most-ai-startups-are-built-to-die</link><guid isPermaLink="false">https://www.thetimes.blog/p/most-ai-startups-are-built-to-die</guid><dc:creator><![CDATA[Evan O'Donnell]]></dc:creator><pubDate>Tue, 24 Jun 2025 13:42:28 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!F8Lk!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F759a8de3-33dd-4676-ad92-0e298c62f56c_636x636.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>As AI makes software more malleable, value is shifting to the orchestration layer. This post explores why open-source products are best suited for this paradigm, and how they need to be designed to build a competitive edge.</p><div><hr></div><h3><strong>01 |  </strong>&#8220;Walled gardens&#8221; are a relic.</h3><p>Something&#8217;s shifting in how people are talking about AI. Early excitement seems to be giving way to tacit skepticism &#8211; especially from users implementing these tools in their day-to-day workflows, and developers who understand what&#8217;s going on under the hood.</p><p>This skepticism stems from a fundamental problem: today, most AI tools are only creating the <em>illusion of productivity</em>.</p><p>AI generates content quickly, completing tasks in seconds. But the quality is inconsistent. Without strong fine-tuning, robust memory architecture, and a clear scope around what the system should and should not own, the output can be <a href="https://timespan.us22.list-manage.com/track/click?u=1ef09c473dad0e37c3fb15e20&amp;id=c133c89c64&amp;e=abab8c0019">mediocre</a>. AI tools introduce subtle errors &#8211; inaccurate references, hallucinations, polished UIs hiding fragile back-ends &#8211; that humans have to fix. This leaves users underwhelmed and less efficient.</p><p>That gap &#8211; between how effective AI feels at first and how it actually performs over time &#8211; is a material risk, and one I don&#8217;t see many investors pricing in. Given where the technology stands today, we may have overestimated the staying power of many AI products, especially those that promise to automate entire workflows or think on our behalf.</p><p>A recent <a href="https://timespan.us22.list-manage.com/track/click?u=1ef09c473dad0e37c3fb15e20&amp;id=69220788b3&amp;e=abab8c0019">MIT study</a> found that overreliance on tools like ChatGPT can impair memory and reduce our capacity for independent thinking. That&#8217;s not just an ethical concern; it&#8217;s a product risk. When tools take too much control, buyers stop paying attention.
They check out faster and are more likely to churn when the results don&#8217;t hold up.</p><p>So why are so many of these tools falling short?</p><p><strong>In part, because of how they&#8217;re designed.</strong></p><p>Most AI products today are built on <em>closed, proprietary architecture</em>. These &#8220;walled garden&#8221; products resemble traditional SaaS and consumer apps &#8211; only now with autocomplete.</p><p>Closed applications predefine the logic, features, and data flows. End users (or agents) can tweak inputs, but they can&#8217;t rewire the underlying source code. Interaction is limited to a UI or managed endpoints that abstract away the nuts and bolts of how the system actually works.</p><p>That abstraction made sense in the SaaS era, when building software required a large team of expensive developers, and when users favored simplicity over control.</p><p>But today, code is cheap to produce and software can be written with natural language. This means users and agents can now inspect, rewire, and extend product capability <em>themselves</em>, a trend that will only <a href="https://timespan.us22.list-manage.com/track/click?u=1ef09c473dad0e37c3fb15e20&amp;id=04ca19afa6&amp;e=abab8c0019">accelerate</a> over the next 2-3 years.</p><p><strong>In this new paradigm of flexible, participatory software, abstraction from source code creates unnecessary rigidity in the product experience.</strong></p><p><strong>And that rigidity is a competitive liability.</strong></p><p>AI startups building on closed foundations are vulnerable. It&#8217;s easy for incumbents &#8211; who are also built on proprietary stacks, but with broad distribution and deep pockets &#8211; to integrate AI into existing workflows and <a href="https://timespan.us22.list-manage.com/track/click?u=1ef09c473dad0e37c3fb15e20&amp;id=7a5ee4746c&amp;e=abab8c0019">feature-kill</a> startups overnight.</p><p>And the usual startup defense &#8211; amassing a data moat as the company scales &#8211; is <a href="https://timespan.us22.list-manage.com/track/click?u=1ef09c473dad0e37c3fb15e20&amp;id=c927b1197b&amp;e=abab8c0019">no longer</a> a reliable strategy. With the <a href="https://timespan.us22.list-manage.com/track/click?u=1ef09c473dad0e37c3fb15e20&amp;id=60840da338&amp;e=abab8c0019">right training methods</a>, models do not need a large dataset to generate tailored output.</p><p>As a result, many of today&#8217;s AI startups, even those with fast initial growth to $10M or $20M ARR, may struggle to sustain themselves over time.</p><div><hr></div><h3><strong>02 |  </strong>Orchestration is the new moat.</h3><p>So where can startups build a competitive advantage?</p><p><strong>Increasingly, at the orchestration layer.</strong> This is the part of the stack that governs how software <em>behaves</em> &#8212; how inputs are routed, components are selected and sequenced, workflows are modified, and network incentives are managed.</p><p>Startups can win by building products that users can customize and shape <em>themselves</em>.</p><p><strong>This follows a familiar pattern in software: when a component of the software stack suddenly becomes cheap and widely accessible, value shifts to controlling how that component is then manipulated and used</strong>.</p><p>In the cloud era, data storage became the commodity. Value moved to building large, unique data sets, and building systems that could manipulate that data in ways others couldn&#8217;t.</p><p>Today, AI is making code the commodity. 
Anyone can generate features, and data matters far less. So the value is now in product customization and flexibility &#8211; how users can build and manipulate the code to meet their objectives.</p><p><strong>In a world where orchestration is the moat, open-source companies have a structural advantage over their "walled garden" counterparts.</strong></p><p>Here&#8217;s why:</p><p><strong>1. Open-source products are </strong><em><strong>directly programmable</strong></em><strong>.</strong><br>By exposing core components &#8212; like the code, model weights, or data &#8212; open-source systems let developers directly modify how the product works. This enables far greater flexibility and customization than closed systems, which only allow for configuration within predefined boundaries set by the vendor.</p><p><strong>2. Contributor networks will amplify open-source advantages.</strong><br>Open-source ecosystems benefit from global contributors who extend functionality, add integrations, and patch issues in real time. This speeds up development and keeps the product aligned with the latest tools and use cases &#8212; a key advantage when orchestration and flexibility matter more than ever.</p><p><strong>3. Incumbents must self-cannibalize in order to compete.</strong><br>Open-source startups win by offering free, flexible alternatives to expensive, closed products, then monetizing through add-ons. Incumbents can&#8217;t follow this open-source playbook without exposing their code or allowing self-hosting, moves that would erode their margins and undermine their business model.</p><p><strong>4. Better alignment between usage and revenue.</strong><br>As orchestration becomes the core source of value, companies need pricing models that scale with usage and integration. Traditional software is sold as a fixed-cost product &#8211; priced per seat or per tier &#8211; which breaks down when value comes from ongoing customization and system-level control. Open-source flips this. It moves both costs and revenue to a variable structure. Teams start with a free, self-hosted product, pay only for the infrastructure required to run it, and adopt paid add-ons like hosting, tooling, or support as the product becomes more deeply integrated into their workflow. This creates a more flexible cost structure and better aligns revenue with value creation for the customer.</p><div><hr></div><h3><strong>03 |  </strong>How to make open-source work.</h3><p>Open-source software is well-suited for a world where orchestration is a key differentiator. But without thoughtful design, these systems can become brittle and hard to manage.</p><p>Winning products strike a careful balance&#8230; <em>flexible enough to customize, stable enough to scale.</em></p><p>The best open-source products start with <strong>strong default configurations</strong> that simplify setup and deployment. They need to work out of the box for teams that never customize a thing.</p><p>At the same time, these products need to offer <strong>clear extension points</strong> that guide users on <em>where</em> and <em>how</em> to modify the product without breaking its core logic.</p><p>They must also support <strong>portability</strong>, giving users control over their data, the freedom to deploy across environments, and easy ways to connect into external systems.</p><p>In open-source, every new contribution can introduce bugs or security risks. These systems need strong <strong>quality controls</strong>. 
CI/CD pipelines must go beyond syntax checks to ensure code changes behave as expected and come from trusted sources. AI can help automate this, but only if the vendors define what &#8220;safe&#8221; and &#8220;correct&#8221; mean, and bake that into the system design.</p><p>In open-source, the core product is usually free, and companies make money by <strong>selling add-ons</strong>. As value shifts to orchestration, the most valuable add-ons will be the ones that help teams run, manage, and extend the system more effectively. That includes:</p><ul><li><p><em>Agentic companions.</em> AI agents that automate tasks, trigger workflows, and coordinate across components.</p></li><li><p><em>Monitoring and alerting.</em> Dashboards to trace system behavior, with automated alerts to detect and debug failures.</p></li><li><p><em>Enterprise integrations.</em> Prebuilt connectors that reduce integration risk and speed up deployment.</p></li><li><p><em>Network access and transaction fees.</em> Connecting to live systems &#8212; like payment networks or third-party services &#8212; and taking a cut of financial or data transactions.</p></li><li><p><em>Collaboration and governance.</em> Shared templates, usage analytics, and role-based access controls for multi-user teams.</p></li></ul><p>Finally, great open systems are <strong>observable</strong>. They help users understand system behavior, debug issues, and ensure modified deployments are running predictably.</p><div><hr></div><p>The best AI products going forward will be the ones users can shape directly.</p><p>You can&#8217;t just wrap a black-box model in a closed app and expect to beat billion-dollar incumbents. They can do that too, and distribute it faster.</p><p>What they <em>can&#8217;t</em> do is open up their systems, at least not without cannibalizing their existing business.</p><p>That&#8217;s where open-source has an edge. These products put users in the driver&#8217;s seat, giving them the power to inspect, remix, and extend the system to fit their needs. This level of participation builds trust, drives retention, and keeps users engaged.</p><p>It&#8217;s a hard product challenge. But it&#8217;s the best path to defensibility. And frankly, it&#8217;s a more exciting category to invest in, and to help founders &#8211; and now users &#8211; build together.</p>]]></content:encoded></item><item><title><![CDATA[Tariffs, Tech Cycles, and a New Economic Reality]]></title><description><![CDATA[Global markets are entering a new chapter, one that demands a shift in how early-stage investors operate.]]></description><link>https://www.thetimes.blog/p/technology-at-a-crossroads</link><guid isPermaLink="false">https://www.thetimes.blog/p/technology-at-a-crossroads</guid><dc:creator><![CDATA[Evan O'Donnell]]></dc:creator><pubDate>Mon, 28 Apr 2025 22:38:29 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!AIpb!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fde7f691c-f344-4885-8ce9-7c97b75dca24_2882x1851.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Global markets are entering a new chapter, one that demands a shift in how early-stage investors operate.
This issue breaks down the structural forces reshaping the macro and technology landscape &#8211; and how value is set to accrue differently going forward.</p><div><hr></div><h3><strong>01 |  </strong>A new economic paradigm</h3><p>I&#8217;ve spent the past few weeks trying to make sense of the shifting economic landscape, and what it means for technology markets. Bridgewater&#8217;s recent newsletter, &#8220;Adapting to a New Reality,&#8221; offers a bold take on this moment:</p><blockquote><p><em><strong>To state the obvious: we are now facing a radically different economic and market environment that threatens the existing world order and monetary system...</strong> This new macroeconomic and geopolitical paradigm is turning past tailwinds into headwinds and reshaping global flows of capital.</em></p><p><em>If you were to list the defining characteristics of recent decades and compare them to today, you&#8217;d struggle to find much overlap. We have been through many big economic shifts over Bridgewater&#8217;s 50-year history, so we don&#8217;t speak lightly when we say that this looks like a once-in-a-generation one.</em></p></blockquote><p>There is still <em>a lot</em> of <a href="https://timespan.us22.list-manage.com/track/click?u=1ef09c473dad0e37c3fb15e20&amp;id=dfd1e88308&amp;e=abab8c0019">uncertainty</a> around the degree and pace of change. But the trajectory is becoming increasingly clear &#8211; we&#8217;re entering a new environment, unlike anything most of us have invested or operated in before.</p><p>For technology and venture investors, I don&#8217;t think this requires a full reset. But it is a moment for first-principles thinking:</p><p>&#8594; How is the macro environment changing?</p><p>&#8594; Where are we in the technology cycle?</p><p>&#8594; And therefore, how does the venture playbook need to adjust?</p><div><hr></div><h3><strong>02 |  </strong>How is the macro environment changing?</h3><p>Two fundamental shifts are currently underway in the political economy:</p><ol><li><p><strong>Fragmentation of global alliances and supply chains</strong>, which could drive down U.S. equity prices.</p></li><li><p><strong>Growing risk of U.S. stagflation.</strong></p></li></ol><p>Both will fundamentally redirect capital flows and reshape overall demand.</p><div><hr></div><h4><strong>A |  Globalization &#8594; balkanization</strong></h4><p>For the past decade, U.S. stocks have soared, lifted by decades of global trade and easy, cross-border investment. Today, U.S. 
equities trade at a <em>70% premium</em> to the rest of the world &#8212; meaning they need to attract about 70 cents of every new dollar invested in global equities just to maintain their current value.<strong><sup>[1]</sup></strong><br><br><strong>This is an extreme requirement, a higher bar than we&#8217;ve seen at any point in the last half-century.</strong></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!AIpb!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fde7f691c-f344-4885-8ce9-7c97b75dca24_2882x1851.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!AIpb!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fde7f691c-f344-4885-8ce9-7c97b75dca24_2882x1851.png 424w, https://substackcdn.com/image/fetch/$s_!AIpb!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fde7f691c-f344-4885-8ce9-7c97b75dca24_2882x1851.png 848w, https://substackcdn.com/image/fetch/$s_!AIpb!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fde7f691c-f344-4885-8ce9-7c97b75dca24_2882x1851.png 1272w, https://substackcdn.com/image/fetch/$s_!AIpb!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fde7f691c-f344-4885-8ce9-7c97b75dca24_2882x1851.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!AIpb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fde7f691c-f344-4885-8ce9-7c97b75dca24_2882x1851.png" width="1456" height="935" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/de7f691c-f344-4885-8ce9-7c97b75dca24_2882x1851.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:935,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:851384,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.thetimes.blog/i/162367787?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fde7f691c-f344-4885-8ce9-7c97b75dca24_2882x1851.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!AIpb!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fde7f691c-f344-4885-8ce9-7c97b75dca24_2882x1851.png 424w, https://substackcdn.com/image/fetch/$s_!AIpb!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fde7f691c-f344-4885-8ce9-7c97b75dca24_2882x1851.png 848w, https://substackcdn.com/image/fetch/$s_!AIpb!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fde7f691c-f344-4885-8ce9-7c97b75dca24_2882x1851.png 1272w, 
https://substackcdn.com/image/fetch/$s_!AIpb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fde7f691c-f344-4885-8ce9-7c97b75dca24_2882x1851.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h6>Source: <strong><a href="https://www.eatonvance.com/insights/articles/2025-a-pivotal-year.html">Eaton Vance</a></strong></h6><p></p><p>For U.S. equities to sustain these valuations, two factors must hold:</p><ol><li><p><strong>A large and persistent capital surplus,</strong> meaning foreign investment into U.S. assets must continue to significantly exceed U.S. investment abroad.</p></li><li><p><strong>Continued access to global markets.</strong> Today, foreign demand is driving a meaningful (and growing) share of U.S. company&#8217;s revenue and earnings.<strong><sup>[2]</sup></strong></p></li></ol><p>Trump&#8217;s agenda &#8212; <a href="https://timespan.us22.list-manage.com/track/click?u=1ef09c473dad0e37c3fb15e20&amp;id=9cab61a8e2&amp;e=abab8c0019">reversing trade deficits</a>, <a href="https://timespan.us22.list-manage.com/track/click?u=1ef09c473dad0e37c3fb15e20&amp;id=527b4b9d84&amp;e=abab8c0019">U.S. self-sufficiency</a>, and tighter government control over industries like energy and defense &#8212; poses a significant threat to both. The recent U.S. Treasury <a href="https://timespan.us22.list-manage.com/track/click?u=1ef09c473dad0e37c3fb15e20&amp;id=8548085e88&amp;e=abab8c0019">sell-off</a> and rising yields are early signs of this strain taking hold.</p><p><strong>To be clear, Trump&#8217;s policies are simply the latest (and unnecessarily chaotic) expression of this trend. But international decoupling has been building for years. </strong>Global trade restrictions have <a href="https://timespan.us22.list-manage.com/track/click?u=1ef09c473dad0e37c3fb15e20&amp;id=4f342ff1fc&amp;e=abab8c0019">tripled</a> since 2018. And even Biden-era policies like the CHIPS and Inflation Reduction Acts reinforce a broader trend toward reshoring and national self-reliance.</p><p>As globalization fractures, expect a compression in U.S. asset prices. 
The biggest impact and pain will be felt in the most concentrated parts of the market: <strong>(a) public tech megacaps,</strong> and <strong>(b) exits for late-stage private tech companies,</strong> especially those with stretched valuations, additional scale requirements to hit profitability, and cash reserves that can be traced back to foreign investment.</p><div><hr></div><h4><strong>B |  The risk of stagflation</strong></h4><p>Over the past decade, U.S. equities were buoyed by strong growth and earnings, fueled in part by government borrowing. Modest growth in tax receipts, rising expenditures, and strong foreign demand for Treasuries allowed the U.S. to run large deficits without major consequences.<strong><sup>[3]</sup></strong></p><p>This borrowing injected liquidity into the economy, boosting household demand and, in turn, driving steady growth in corporate revenues and profit margins.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!jH9t!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b2f5251-3bd0-45ea-9032-6476e173ff9b_1226x572.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!jH9t!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b2f5251-3bd0-45ea-9032-6476e173ff9b_1226x572.png 424w, https://substackcdn.com/image/fetch/$s_!jH9t!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b2f5251-3bd0-45ea-9032-6476e173ff9b_1226x572.png 848w, https://substackcdn.com/image/fetch/$s_!jH9t!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b2f5251-3bd0-45ea-9032-6476e173ff9b_1226x572.png 1272w, https://substackcdn.com/image/fetch/$s_!jH9t!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b2f5251-3bd0-45ea-9032-6476e173ff9b_1226x572.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!jH9t!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b2f5251-3bd0-45ea-9032-6476e173ff9b_1226x572.png" width="1226" height="572" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7b2f5251-3bd0-45ea-9032-6476e173ff9b_1226x572.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:572,&quot;width&quot;:1226,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:179088,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.thetimes.blog/i/162367787?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b2f5251-3bd0-45ea-9032-6476e173ff9b_1226x572.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!jH9t!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b2f5251-3bd0-45ea-9032-6476e173ff9b_1226x572.png 424w, https://substackcdn.com/image/fetch/$s_!jH9t!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b2f5251-3bd0-45ea-9032-6476e173ff9b_1226x572.png 848w, https://substackcdn.com/image/fetch/$s_!jH9t!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b2f5251-3bd0-45ea-9032-6476e173ff9b_1226x572.png 1272w, https://substackcdn.com/image/fetch/$s_!jH9t!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b2f5251-3bd0-45ea-9032-6476e173ff9b_1226x572.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h6><strong>Source:</strong> Bridgewater. &#8220;The Growing Risk of U.S. Assets.&#8221; Mar 6, 2025.</h6><p></p><p><strong>Today, that dynamic &#8211; which American workers, consumers, and investors have long taken for granted &#8211; is under threat.</strong></p><p>Rather than expanding the tax base, the current administration is pursuing aggressive spending cuts through DOGE, the upcoming tax bill, and a <a href="https://timespan.us22.list-manage.com/track/click?u=1ef09c473dad0e37c3fb15e20&amp;id=5bee9dba9b&amp;e=abab8c0019">commitment</a> to public sector deleveraging.</p><p>If government borrowing falls, household demand and business performance will likely weaken. And unless private sector borrowing rises sharply to make up for the gap &#8212; unlikely given tighter credit and waning consumer confidence &#8212; economic growth will slow. At the same time, new tariffs (both U.S.-imposed and retaliatory) are adding inflationary pressure.</p><p><strong>If these trends hold, there&#8217;s a real risk of stagflation</strong>, which would leave the Fed constrained, unable to stimulate the economy without worsening price pressures.</p><p>This isn&#8217;t a comfortable diagnosis. 
But I do believe a more clinical understanding of these dynamics, combined with a clear sense of where we are in the tech cycle, can help investors navigate the road ahead &#8212; and capture value while others hit pause or cling to the old playbook.</p><div><hr></div><h3><strong>03 | </strong>Where are we in the technology cycle?</h3><p>Technology innovation evolves in 10&#8211;15 year cycles, usually somewhat independent from broader macro forces:</p><blockquote><p><strong>Phase 1 </strong> Each cycle begins with a breakthrough that creates a step-change in capability, followed by a short-term surge of excitement, over-investment, and an eventual crash.</p><p><strong>Phase 2  </strong>New, open standards emerge, enabling broader distribution and sustainable adoption. Commercial applications are built on top of those standards, organizing users, data, and transactions in new ways to capture value, and entrenching themselves through network effects and scale.</p><p><strong>Phase 3  </strong>As adoption matures, the underlying technology hits market saturation. The marginal gains of using the technology shrink, and asymmetric upside fades. A new breakthrough is needed to restart the cycle.</p></blockquote><p>Each cycle builds on the infrastructure of the last, so the opportunity for value creation grows larger with each subsequent wave.</p><p><strong>Today, we are entering a new era of intelligent compute, somewhere between Phase 1 and Phase 2.</strong></p><p>The transformer model triggered a step-change in machine intelligence, catalyzing early excitement and a <a href="https://timespan.us22.list-manage.com/track/click?u=1ef09c473dad0e37c3fb15e20&amp;id=bf6d56c519&amp;e=abab8c0019">surge of investment</a>. But most AI startups today still rely on proprietary models they don&#8217;t control, and cloud infrastructure built for static data and human-centric workflows.</p><p>Going forward, durable innovation will require new, open standards that can scale AI natively across industries, not just patch it onto legacy systems.</p><p>Today&#8217;s moment in technology evolution parallels the late 1990s, before the dot-com bubble. But the financial and competitive conditions are meaningfully different, suggesting history won&#8217;t repeat itself in exactly the same way:</p><p><strong>The opportunity is larger. </strong>If historical patterns hold, the total addressable surface for new AI applications should be multiples larger than last cycle, as AI unlocks broader capabilities and pushes software deeper into workflows that were previously out of reach.</p><p><strong>The incumbents are stronger.</strong> They are software-native, not legacy industrial models. Big Tech still owns distribution, compute, and capital. These incumbents can iterate faster and absorb innovation more efficiently than what we&#8217;ve seen in prior cycles.</p><p><strong>Private markets are more robust.</strong> Illiquidity, dry powder, and momentum-based psychology have delayed the normal flushing out of risk, making a sharp &#8220;crash&#8221; less likely.</p><div><hr></div><h3><strong>04 |</strong> How does the venture playbook need to adjust?</h3><p>To summarize, U.S. technology markets are facing the following dynamics:</p><ul><li><p><strong>As the U.S. pulls back from the global economy, asset prices, especially in late- and megacap tech, are likely to fall.</strong> Expect a lower ceiling on valuations going forward.</p></li><li><p><strong>The U.S. 
economy is slowing</strong>, with businesses and consumers facing higher costs, weakening demand, and early signs of waning access to global markets and capital.</p></li><li><p><strong>We are in the early phases of a new technology cycle.</strong> As AI permeates the economy, it will unlock entirely new markets and power economic growth. But in the near-term, private market dynamics may continue to mask fragile business models. Early-stage investors who see this clearly will avoid the worst excesses.</p></li><li><p><strong>Startups today are competing against software-native incumbents.</strong> Without structural product innovation &#8211; new architectures, new data sets, new methods for interoperability &#8211; it will be difficult for startups to maintain a durable edge.</p></li></ul><p>All that said, early-stage venture remains an attractive category.</p><p>It has always been more insulated and idiosyncratic than other alternative assets. It usually represents only a small slice of portfolios and the broader economy. It is long-duration and deflationary by nature. And historically, it is a source of outliers, even in turbulent times.</p><p>But in this new environment, delivering alpha will require a reset, building portfolios that are <em>even more uncorrelated</em> &#8211; anchored by companies with structural product advantages.</p><div><hr></div><p>Early-stage investors today should focus on a few key principles:</p><p></p><h4><strong>A | Centralized distribution &#8594; open-source diffusion</strong></h4><p>In the last cycle, the dominant path to value creation was achieving scale quickly &#8212; one product, everywhere. Today, as globalization fractures, technology must be as flexible, permissionless, and adaptable to local needs as possible.</p><p><strong>Open-source and public architecture is best positioned to power this shift.</strong></p><p>In a world of ever-changing trade restrictions and rising geopolitical conflict, users and businesses will favor software that is easiest to deploy, modify, and control. Open-source offers the most convenient and frictionless path &#8211; minimizing adoption barriers, maximizing local flexibility, and allowing users to extend functionality without waiting on vendors.</p><p>There&#8217;s historical precedent for this. In the 1990s, open-source encryption <a href="https://timespan.us22.list-manage.com/track/click?u=1ef09c473dad0e37c3fb15e20&amp;id=2aa202c655&amp;e=abab8c0019">moved across borders</a> freely, despite government resistance. Developer collaboration and free distribution outpaced export controls and outperformed centralized, government-sponsored encryption standards.</p><p>The same forces are at play today, amplified by AI-driven development, which makes it easier than ever to customize <em>and</em> monetize open-source applications.</p><p><strong>Code doesn&#8217;t stop at customs.</strong> And in a more fractured world, open-source will be the default engine for technology diffusion.</p><p></p><h4><strong>B | Portfolio diversification &#8594; portfolio concentration</strong></h4><p>In the last macro cycle, broad, early-stage indexing could work. You could hold a wide portfolio, and a handful of moderate winners would be enough to drive acceptable returns. 
Capital consensus could support mediocre products &#8211; companies could survive longer by raising the next round, keeping valuations and portfolio marks afloat.</p><p><strong>In the new macro, these dynamics are unlikely to hold.</strong></p><p>Expect failure rates to rise as underperforming companies lose access to capital. And if foreign investors walk away from U.S. assets, yields continue to rise, and public debt crowds out private investment, allocators will demand even higher returns in venture capital to justify illiquidity and risk.</p><p>Without high, concentrated ownership at exit, moderate exits won't move the needle on fund-level outcomes.</p><p>Portfolios must concentrate capital earlier and position themselves to be even more uncorrelated &#8212; or risk getting sucked into unacceptable mediocrity.</p><p></p><h4><strong>C | Vertical integration &#8594; composable architecture</strong></h4><p>The last generation of software companies won by building proprietary, vertically integrated products. Salesforce, for example, became a $250B company by centralizing customer data, controlling integrations, and expanding its own feature set across a single, standardized platform. This was the fastest path to dominance &#8212; both technically (ensuring reliability and control) and commercially (scaling a single platform across rapidly integrating markets).</p><p><strong>Today, that model no longer fits.</strong></p><p>Customers need software that: (i) flexes to local requirements, (ii) integrates seamlessly into preexisting networks, (iii) enables user-led customization, and (iv) automates workflows across disparate systems. Advances in encryption and AI now make this possible.</p><p><strong>On this dimension, the incumbents are trapped.</strong> Their economics depend on locking in users and data. They can bolt on integrations and AI automations, but they can't truly open their architecture without cannibalizing the lock-in and rigid product formats that sustain their business models.</p><p><strong>In this new cycle, value will accrue to startups built on open, composable systems</strong> &#8212; architectures that allow users to adapt and orchestrate their own workflows.</p><p>This changes how early-stage investors must underwrite companies. It&#8217;s no longer enough to evaluate static feature differentiation, data moats, and product roadmaps. They must evaluate the underlying architecture:</p><ul><li><p>Can the system adapt and operate across fragmented environments?</p></li><li><p>Does the product gain strength as users customize and extend it?</p></li><li><p>Is there value at the orchestration and coordination layer?</p></li><li><p>Does the architecture naturally align incentives between users and the broader network over time?</p></li></ul><p>The next $250B software company won't scale by perfecting control and ownership. It will scale through flexible architecture.</p><div><hr></div><p>The terrain is shifting, but the fundamentals of good investing remain intact. In times like these, clear thinking, and a willingness to interrogate old assumptions and adapt, are the ultimate edge.</p><div><hr></div><div><hr></div><p><strong>Endnotes:</strong></p><p><strong><sup>[1]</sup></strong> This dynamic is especially pronounced in large-cap tech. 
Today, the top 10% of S&amp;P 500 companies &#8212; led by Apple, Microsoft, NVIDIA, Amazon, Meta, and Alphabet &#8211; account for <a href="https://timespan.us22.list-manage.com/track/click?u=1ef09c473dad0e37c3fb15e20&amp;id=9cb67daf88&amp;e=abab8c0019">more than half</a> of the total U.S. equity market value. That&#8217;s an historic high.</p><p><strong><sup>[2]</sup></strong> Today, U.S. companies capture ~40% (and growing) of global corporate profits, despite making up less than 20% of global GDP. And ~40% of Magnificent 7 revenue in 2024 came from overseas.</p><p><strong><sup>[3]</sup></strong> Since 2008, the federal deficit has steadily widened due to slow revenue growth (modest GDP expansion, major tax cuts) alongside rising expenditures (aging-related entitlements, crisis-driven spending, and growing interest payments).</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!iDcO!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faeb680d2-64ce-41d2-9542-12047ffc3f05_3047x1705.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!iDcO!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faeb680d2-64ce-41d2-9542-12047ffc3f05_3047x1705.png 424w, https://substackcdn.com/image/fetch/$s_!iDcO!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faeb680d2-64ce-41d2-9542-12047ffc3f05_3047x1705.png 848w, https://substackcdn.com/image/fetch/$s_!iDcO!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faeb680d2-64ce-41d2-9542-12047ffc3f05_3047x1705.png 1272w, https://substackcdn.com/image/fetch/$s_!iDcO!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faeb680d2-64ce-41d2-9542-12047ffc3f05_3047x1705.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!iDcO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faeb680d2-64ce-41d2-9542-12047ffc3f05_3047x1705.png" width="1456" height="815" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/aeb680d2-64ce-41d2-9542-12047ffc3f05_3047x1705.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:815,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:236414,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.thetimes.blog/i/162367787?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faeb680d2-64ce-41d2-9542-12047ffc3f05_3047x1705.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!iDcO!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faeb680d2-64ce-41d2-9542-12047ffc3f05_3047x1705.png 424w, 
https://substackcdn.com/image/fetch/$s_!iDcO!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faeb680d2-64ce-41d2-9542-12047ffc3f05_3047x1705.png 848w, https://substackcdn.com/image/fetch/$s_!iDcO!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faeb680d2-64ce-41d2-9542-12047ffc3f05_3047x1705.png 1272w, https://substackcdn.com/image/fetch/$s_!iDcO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faeb680d2-64ce-41d2-9542-12047ffc3f05_3047x1705.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h6><strong>Source:</strong> <a href="https://www.cbo.gov/data/budget-economic-data">CBO</a></h6>]]></content:encoded></item><item><title><![CDATA[A Guide for the Contrarian LP]]></title><description><![CDATA[with Benedikt Langer]]></description><link>https://www.thetimes.blog/p/a-guide-for-the-contrarian-lp</link><guid isPermaLink="false">https://www.thetimes.blog/p/a-guide-for-the-contrarian-lp</guid><dc:creator><![CDATA[Evan O'Donnell]]></dc:creator><pubDate>Sun, 13 Apr 2025 16:04:28 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!tDgw!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc25bda7c-4b2e-4c93-adb0-edac85b5408f_2140x1197.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>For this issue, I sat down with someone I've been following for a while, <a href="https://www.linkedin.com/in/benedikt-langer/">Benedikt Langer</a>.</p><p>Benedikt is an LP in emerging venture funds and author of <a href="https://embracingemergence.beehiiv.com/">Embracing Emergence</a>, a blog I love for its refreshing, first principles take on fund investing.</p><p>It&#8217;s no secret that we're in a different environment for new venture funds. 
In 2024, total fundraising <a href="https://www.junipersquare.com/blog/vc-q4-2024">declined 21%</a>, and three-quarters of all those commitments <a href="https://pitchbook.com/news/articles/us-vc-fundraising-concentration-andreessen-horowitz">concentrated</a> in just 30 large firms. The market is still recalibrating from the 2020-2022 bubble, and liquidity constraints and macro uncertainty continue to extend these dynamics.</p><p>At the same time, we&#8217;re seeing a profound technology shift, with AI reshaping every industry. And there's growing evidence that <a href="https://sante.com/wp-content/uploads/2023/11/Why-Venture-Capital-Does-Not-Scale-2023-Update.pdf">smaller</a>, <a href="https://www.cambridgeassociates.com/insight/venture-capital-positively-disrupts-intergenerational-investing/">emerging</a> firms are a more reliable source of alpha, especially in moments when market instability intersects with rapid technological progress.</p><p>Storied firms are <a href="https://www.itamarnovick.com/how-does-early-stage-venture-capital-perform-in-a-recession-or-its-time-to-build-invest/">born</a> in times like this, when incumbents rest on the old playbook, prices reset, and market shocks shake up the marketplace of talent and ideas.</p><p>If a renaissance is somewhere on the horizon, perhaps now is the time to reframe the value proposition of new firms, and to refine the playbook for how to evaluate them. I wanted Benedikt's take, and to put the LP and GP perspectives into a more candid conversation.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!tDgw!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc25bda7c-4b2e-4c93-adb0-edac85b5408f_2140x1197.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!tDgw!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc25bda7c-4b2e-4c93-adb0-edac85b5408f_2140x1197.png 424w, https://substackcdn.com/image/fetch/$s_!tDgw!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc25bda7c-4b2e-4c93-adb0-edac85b5408f_2140x1197.png 848w, https://substackcdn.com/image/fetch/$s_!tDgw!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc25bda7c-4b2e-4c93-adb0-edac85b5408f_2140x1197.png 1272w, https://substackcdn.com/image/fetch/$s_!tDgw!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc25bda7c-4b2e-4c93-adb0-edac85b5408f_2140x1197.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!tDgw!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc25bda7c-4b2e-4c93-adb0-edac85b5408f_2140x1197.png" width="534" height="298.5412087912088" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c25bda7c-4b2e-4c93-adb0-edac85b5408f_2140x1197.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:814,&quot;width&quot;:1456,&quot;resizeWidth&quot;:534,&quot;bytes&quot;:814361,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.thetimes.blog/i/161199033?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc25bda7c-4b2e-4c93-adb0-edac85b5408f_2140x1197.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!tDgw!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc25bda7c-4b2e-4c93-adb0-edac85b5408f_2140x1197.png 424w, https://substackcdn.com/image/fetch/$s_!tDgw!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc25bda7c-4b2e-4c93-adb0-edac85b5408f_2140x1197.png 848w, https://substackcdn.com/image/fetch/$s_!tDgw!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc25bda7c-4b2e-4c93-adb0-edac85b5408f_2140x1197.png 1272w, https://substackcdn.com/image/fetch/$s_!tDgw!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc25bda7c-4b2e-4c93-adb0-edac85b5408f_2140x1197.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><div><hr></div><h3><strong>01 |</strong> &#8220;Everyone has a track record.&#8221;</h3><p><strong>EO: I want to start with what you look for in emerging managers. 
You&#8217;ve written about concepts like <a href="https://embracingemergence.beehiiv.com/p/look-emerging-managers">proximity</a>, <a href="https://embracingemergence.beehiiv.com/p/magic-emerging-managers-possess">magic</a>, <a href="https://embracingemergence.beehiiv.com/p/fund-name-pitches-fund-better-deck">linguistic consistency</a>. I like those frameworks, more than traditional heuristics, like a manager&#8217;s &#8220;right to win&#8221; or &#8220;unique sourcing.&#8221;</strong></p><p><strong>Could you distill the 2-3 things you center on when figuring out if an emerging manager has what it takes?</strong></p><p>BL: At the family office, we primarily look for <em>a GP&#8217;s ability to invest in people</em>. This is especially relevant in venture.</p><p>There&#8217;s a concept I love from Thomas Merton, a monk and writer. He wrote about the <em>false self</em>. It&#8217;s what we tell ourselves about who we are, and then in turn what we tell the world about who we are. But that sense of self is never completely accurate &#8211; it&#8217;s a false narrative we all carry.</p><p>VC as an industry is more prone to the false self.</p><p>For people building a startup, who dream of changing the world &#8211; they tell themselves a story that is <em>very</em> <em>unlikely</em> to come true, because the big outcomes so rarely materialize. So there&#8217;s a heightened need for venture GPs to see through the false self, to pinpoint this discrepancy in the founders they invest in.</p><p>Personal self-awareness is one of the clearest signals of a GP's ability to do this. The more integrated someone is themselves, the more clearly they can see where others are headed.</p><p><strong>EO: And what else do you look for?</strong></p><p>BL: A balance between thoughtfulness and conviction. Too much thoughtfulness, and you never get anything done. Too much conviction, and you end up running in the wrong direction.</p><p>I can usually get a sense of this quickly, on the first call. When a manager describes how they think about their fund, why they picked the strategy, and what they believe the world will look like in 5-10 years, I get a window into how thoughtful they are &#8212; what they care about, how they&#8217;ve developed their taste, what motivates them.</p><p>And when we talk about how they&#8217;ve executed in the different seasons of their life, I can assess their conviction and how they reach decisions.</p><p><strong>How do you validate those attributes? Some people are great at marketing and lack substance, and vice versa. It feels like there&#8217;s a lot of room for LPs to make Type I and II errors if they&#8217;re not evaluating this well.</strong></p><p>I think this ties to what a few LPs might be doing wrong. LPs invest according to what they <em>think</em> they should be doing &#8211; not what they&#8217;re actually good at.</p><p>For example, I&#8217;m a pastor, and the son of the family I work for is also a pastor. We&#8217;ve seen people go through all sorts of stages in their life, including marriages ending and businesses falling apart. We&#8217;ve seen how people say certain things and don&#8217;t live up to their word, or how they do.</p><p>Our strength is assessing people. We trust our discernment there.</p><p>We get under that by asking GPs about their upbringing and their friendships, to understand their story and if their past is reflective of what they&#8217;re saying in the present. 
The key is to ask questions that reveal value in hidden places, answers you otherwise wouldn&#8217;t hear.</p><p><strong>Can you give an example?</strong></p><p>Yeah, we&#8217;ve asked GPs about their insecurities quite a bit, and that opens the conversation.</p><p>It&#8217;s really about second-order questions. We&#8217;ll ask &#8211; <em>what do you look for in founders?</em> You nearly always hear the standard attributes &#8211; grit, humility, etc. But the moment you ask &#8211; <em>how do you assess humility?</em> &#8211; most people stop having answers. If a GP has a strong, specific answer about what humility looks like in a founder, it tells me they&#8217;ve really thought about it. They&#8217;ve likely seen it firsthand, and they&#8217;ve probably reflected on what humility looks like in themselves too.</p><p>That kind of self-awareness and applied insight is what we&#8217;re looking for.</p><p><strong>It&#8217;s interesting &#8212; as a GP, I&#8217;m rarely asked deep, personal questions. But as you're talking, I'm realizing those are often the clearest windows into my judgment and thinking patterns.</strong></p><p><strong>If these questions get skipped or glossed over, there&#8217;s risk of missing the experiences and traits that can predict outperformance in the context of a new venture: clarity under pressure, adaptability, lateral thinking. The things you can&#8217;t see in a deck or data room, but that determine how someone will show up when they're bringing a new vision to life.</strong></p><p>Exactly. That&#8217;s the stuff that matters.</p><p><strong>So why do you think LPs aren&#8217;t diving enough into those signals?</strong></p><p>It&#8217;s a misunderstanding of what is provable. Everyone says, &#8220;Fund 1 managers don&#8217;t have track records.&#8221; But of course they do. Everyone has a track record.</p><p>Are GPs well positioned in their thesis area? Have they been scrappy? Have they built long-term relationships? Those threads are more predictive than crunching the numbers on a track record from a prior firm, which doesn&#8217;t map to the new context a GP is operating in.</p><p>I learn the answers from stories, references, and listening closely.</p><p>For example, if a GP tells me &#8211; <em>&#8220;This founder said, &#8216;I need you in this round,&#8217;&#8221;</em> or <em>&#8220;I took a bet on this founder before anyone else saw it,&#8221;</em> &#8212; and if the founder validates that &#8212; that says something about how you build long-term relationships. I can&#8217;t skip over that.</p><p>I can also assess a GP&#8217;s general thoughtfulness &#8212; whether they have a perspective about where the world is heading in 3, 5, 10 years. I can gauge their ability to raise capital by watching how they pitch. I can assess whether they&#8217;re commercially minded by seeing how they think about building a generational firm, not just a personal brand.</p><p>Most LPs don&#8217;t ask the right questions on Fund 1 calls because they haven&#8217;t re-defined or thought differently about what is actually provable. </p><div><hr></div><h3><strong>02 | </strong>&#8220;Can a new context lead to even better performance?&#8221;</h3><p><strong>Correct me if I'm wrong, but there seems to be a bias for GPs that have &#8220;done it before.&#8221; LPs like the same strategy, from the same team. 
The logical extreme of that is defaulting to wait for Fund 2 or 3, when there&#8217;s more certainty &#8211; or maybe, more </strong><em><strong>perceived</strong></em><strong> certainty.</strong></p><p><strong>This is very, very different from how I evaluate founders and startups.</strong></p><p><strong>I&#8217;m looking for raw capabilities in people &#8211; non-obvious, future indicators. I try to look beyond the common signals of success. Because if something is a known quantity, that means there&#8217;s consensus. But consensus isn&#8217;t where you find alpha. The return profile of your average venture fund makes that concept pretty clear.</strong></p><p><strong>Is this LP orientation around certainty a structural problem in how capital gets allocated? Or simply a rational reflection of the different risk/return profile between a startup versus a fund?</strong></p><p>It might be both.</p><p>There is a difference in what LPs and GPs are underwriting. When GPs evaluate a founder, they only need them to be successful once. If a founder hits a home run, they did their job. But when I bet on a GP, I need them to pick the right teams and products with consistency. I need a different level of repeatability.</p><p>But I do agree there is an unnecessary bias toward waiting for more information, which can lead LPs to miss out on great Fund 1 returns, or lose downstream access.</p><p>By Fund 2, you&#8217;ll know if the GP has executed against their strategy, if they did what they said they were going to do. But you won&#8217;t have much more certainty around outcomes. Even at Fund 2 and 3, you&#8217;re still backing potential.</p><p><strong>I think what also gets missed is how much context matters. It is very different to operate inside someone else&#8217;s machine versus building your own.</strong></p><p><strong>LPs focus on the risk of that. They aren&#8217;t sure how a GP will fare managing their own firm. So they punt.</strong></p><p><strong>But what about the upside potential?</strong></p><p><strong>I don't have data here, but I'd argue the best founder-GPs &#8211; once they have the resources to self-actualize &#8211; usually shine so much brighter running their own strategy, in their own context, than what they achieved inside someone else&#8217;s shop.</strong></p><p><strong>And you&#8217;d miss those returns if you&#8217;re only asking, &#8220;Have they done this exact thing before?&#8221;</strong></p><p>I agree. At established firms, you have brand recognition, infrastructure, resources, a set culture, even a different financial cushion. That&#8217;s a completely different way of investing. It changes everything.</p><p>And LPs have to think about not just what could go wrong, but can a new context lead to even better performance?</p><div><hr></div><h3><strong>03 | </strong>"Managers need to embrace the productivity that comes with inefficiency."</h3><p><strong>I believe one of the biggest asymmetries in the emerging GP-LP dynamic is a vastly different discount rate on time.</strong></p><p><strong>LPs have long relationship-building timelines. But emerging GPs need money yesterday. Emerging GPs can easily get caught in the chicken-egg problem &#8211; asked to show proof points before they are afforded the capital or resources to generate them.</strong></p><p><strong>What do you think about this? Do you think the best emerging GPs or LPs manage or value their time differently than others, in a way that is more aligned?</strong></p><p>One GP comes to mind. 
She was very deliberate about her follow up cadence, and accelerated fundraising by dropping LPs quickly, instead of keeping up with everybody.</p><p>It also depends on the type of LP. Family offices will always be slow, since mostly they&#8217;re optimizing for not losing money. Fund of funds might be more aligned with a GP&#8217;s pace.</p><p><strong>Maybe another way to think about this &#8212; LPs, like venture capitalists, rely on proxies to save time. For example, if a GP is coming out of a name-brand firm, or you get a warm intro from someone you trust, those signals drive momentum.</strong></p><p><strong>Are there any creative or nontraditional signals you&#8217;ve seen LPs rely on that help you get to a deeper sense of a manager, more efficiently than the usual path?</strong></p><p>I think there are a lot of shortcuts LPs take that don&#8217;t end up working.</p><p>For example, a lot of LPs rely on aggressive personal branding or a large following to indicate that a GP attracts founders. But I&#8217;m not convinced that will lead to performance. It&#8217;s a red herring.</p><p>One analogy that might help &#8211; a lot of wealthy people contract artists to create custom pieces, or they hire the same interior designers for multiple projects. The artist becomes their conduit to the creative world, their way to access a space they otherwise wouldn&#8217;t have the ability or time to pursue.</p><p>Fund 1s are the artists for these LPs. These GPs let them participate in new technology, to be at the frontier. So if you manage the relationship in a way where LPs feel that, it can create a connection where they want to be on this journey with you.</p><p><strong>I love this analogy. I started my career at Bridgewater. And Ray Dalio was brilliant at building these types of connections with his early capital partners.</strong></p><p><strong>Unlike venture, redemption is a real risk in hedge funds. As soon as performance dips, LPs might want out. Most firms manage this contractually &#8211; through lock-up periods and notice requirements. But Ray flipped the script. He made sure his capital partners understood macro through a long-term lens, through advisory work and the Daily Observations newsletter. He cultivated a much deeper LP-GP dialogue.</strong></p><p><strong>Trust became a tangible asset, and led to a more durable capital base. This powered a long-term investment strategy.</strong></p><p><strong>One of the reasons we launched Timespan is because we believe the LP-GP relationship in venture can be done better. There are still all sorts of perverse incentives in VC. Valuations are opaque, long-term decisions are made off short-term data.</strong></p><p><strong>But to your point &#8212; if LPs, especially in a Fund 1, want to be part of the artistry of technology, maybe there&#8217;s room to innovate around how that gets done.</strong></p><p>Yes, although this goes back to your question on how LPs value time differently than emerging managers.</p><p>Investing in relationships takes time, wisdom, and discernment. Managers need to embrace the productivity that comes with inefficiency. The best relationships aren&#8217;t easy to build. The Bridgewater example proves that out.</p><p>One tactic is to systematize your touch points. A good analogy is Toast &#8212; the POS system restaurants use now. Before Toast, waiters had to write everything down on paper, which broke the flow of conversation with customers. But with technology, they can take your order while continuing the conversation. 
They check you out on the spot. The system didn&#8217;t take time away from the customer &#8212; it gave them more of it.</p><p><strong>And how would you advise LPs and GPs on how to decide who to spend time with? You can spin your wheels courting someone who will never be the right fit. Or pass too quickly and miss the perfect partner, all because of one awkward call.</strong></p><p>Some LPs value those relationship-building moments more than others, that inefficient expenditure of time. You and I could probably spend time on calls like this over and over, and it would keep compounding the relationship.</p><p>But some LPs just need to deploy, or keep relationships warm until they are ready to deploy. It&#8217;s a more utilitarian approach.</p><p>It&#8217;s hard, but just know who you&#8217;re dealing with and what they want.</p><p><strong>I want to go back to transparency, because I think there may be something more structural at play than relationship building.</strong></p><p><strong>Venture is an idiosyncratic asset class. Valuations can get divorced from fundamentals quickly. Technology is moving at a fast clip. I fear many GPs &#8211; and LPs by extension &#8211; don&#8217;t use or try the products they&#8217;re investing in.</strong></p><p><strong>Is that depth something LPs are dialed into, or would even want to be more dialed into? Have you seen GPs successfully bring LPs into the fold, so they better understand the products their capital is nurturing, and understand the nuanced ways those products create value?</strong></p><p><strong>I have to believe there&#8217;s a way for GPs to build in a faster feedback loop &#8211; way ahead of DPI &#8211; so LPs can improve their decision-making and processes?</strong></p><p>That&#8217;s a very good question, and one I haven&#8217;t thought about much, which might be proving your point.</p><p>The quick answer is yes, venture is more obscure than other asset classes. But I only know the relationship I have with my GPs. We text, we go out to dinner, we have personal conversations. This opens the door to getting more under the hood.</p><p>I haven&#8217;t seen a systemized way of doing that differently, but one underutilized resource is the quarterly update. I recently surveyed LPs, asking them how important the quarterly update is. Nearly everyone ranked it as highly important. I was surprised by that, since most GPs feel like nobody reads them.</p><p>I think these best-practice communication channels could be modernized, and lead to better transparency. But I need to ponder this more; it&#8217;s a good question.</p><p><strong>From the GP perspective, I feel like too many of us succumb to fear. Startups are volatile, products don&#8217;t evolve linearly. It&#8217;s scary to tell an LP, when you're a new firm trying to prove yourself, that something&#8217;s not working, especially if you know they probably aren&#8217;t hearing that from your peers.</strong></p><p><strong>But there&#8217;s a serendipity and connection that can happen if you lay it on the table, assuming both parties are thinking long-term.</strong></p><p>That&#8217;s a good point. You&#8217;re incentivized to keep things positive. Every existing LP is a potential partner for your next fund. 
That&#8217;s a very unique dynamic, and not every industry has it.</p><div><hr></div><h3><strong>04 | </strong>&#8220;Linguistic consistency sparks the imagination.&#8221;</h3><p><strong>Let&#8217;s talk about manager differentiation.</strong></p><p><strong>There&#8217;s a quiet debate among GPs &#8211; is differentiation mostly a marketing exercise, or does it truly drive performance?</strong></p><p><strong>The argument goes &#8211; the best firms historically didn&#8217;t start with a one-of-a-kind thesis or sourcing engine that was impossible to replicate. Their success wasn't because they had something that no one else could tap into.</strong></p><p><strong>It was because they just showed up differently. They were more curious, more tenacious, more willing to adapt and take risks when others didn't. They had a clear, directionally accurate point of view on where to find value in technology, but it wasn&#8217;t a radical departure from other investors. And they backed it with good, old fashioned hard work.</strong></p><p><strong>What&#8217;s your take? Do the best emerging GPs have a real, structural differentiator? Does differentiation matter more now than in the past?</strong></p><p><strong>Or is the market getting distorted &#8211; are LPs creating artificial distinctions to make sense of a crowded market?</strong></p><p>The other day, I reviewed 30 emerging GP pitch decks. They were all effectively the same. There were some great nuggets. But the storytelling wasn&#8217;t clear.</p><p>Put yourself in an LP&#8217;s shoes &#8211; ideally they&#8217;re looking at 500 to 1,000 managers in a year. I know you probably look at more startups than I do funds, but funds blend together very, very quickly. And some GPs really lack the ability to communicate what it is they&#8217;re doing and why it matters.</p><p>There are some real assets GPs underutilize. For example, most GPs don&#8217;t articulate why they chose their fund name. Can I ask, why did you pick the name Timespan?</p><p><strong>My co-founder and I like to nerd out over tech history. To think about technology on a timeline &#8211; so we can spot patterns and filter out legacy software.</strong></p><p><strong>Thinking this way keeps us focused only on the products that are truly novel, that structurally could not have been built just a few years ago. And it's a repeatable and adaptive way to build an investment mandate, so our firm can remain at the forefront and endure over a long period of time.</strong></p><p>See, that communicates a lot. It says that you value analyzing the past, that you have a general understanding of cause-and-effect, and that you will most likely apply that same thinking to markets and people. And that will carry through on the big and small things you do.</p><p>Differentiation is important, because (1) GPs are generally not good at communicating it, (2) we&#8217;re in a very crowded market, there is a lot of noise, and (3) LPs don&#8217;t always ask the right questions to discern it themselves. We lack the attention span to really pinpoint differentiation. We move on quickly. One bad, 30 minute intro call, and the probability your fund gets dropped is very high.</p><p>I believe there&#8217;s a famous clip of someone asking Steve Jobs what he learned at Apple after his first stint. And he said he now takes the long view on people.</p><p>I think as LPs, we may have lost sight of that. 
We don&#8217;t get close enough to people, and so we substitute by relying on a pitch deck, on LinkedIn or Twitter.</p><p>But if we look at the generational firms, they did signal <em>something</em>. Maybe it wasn&#8217;t in their thesis, or in the exact vision for the firm they were going to build. But they could articulate who they are and what they stand for. Maybe it&#8217;s less important to communicate your thesis in some differentiated way, and instead to communicate who you are in a way that leaves an impression.</p><p><strong>I think that&#8217;s right. If GPs think of it as a marketing exercise, as wordsmithing, then that&#8217;s what it becomes. But as a GP, you have to stand out, however you can. That&#8217;s on you.</strong></p><p><strong>And the good LPs, who are a fit for your specific fund, will move on that when they see it. They will take the time to peel back the onion and validate that, and come up with an independent thesis on the qualities that make an emerging GP worth the bet, or not.</strong></p><p>I really agree.</p><p>J.R.R. Tolkien wrote one of the best, most recognized books in the English language, <em>The Lord of the Rings</em>. He was a linguist at his core. When he was a kid, he studied old languages. He came up with the Elvish languages and the names of people and places.</p><p>That linguistic consistency sparks the imagination. It can set great funds apart.</p><p>One GP I reference a lot is <a href="https://www.linkedin.com/in/erica-wenger-ms-811b80132">Erica Wenger</a>. She compares the founders she invests in to elephants. They are communal animals. They have bigger ears than mouths, meaning they listen more than they speak.</p><p>This is not a concrete play-by-play of her diligence process. But it takes me into her world of assessing founders. I get a sense of where she spends time when she meets a new entrepreneur. Sure, there&#8217;s some marketing to it. But it goes beyond that. It positions her a little bit differently, and lets me into her process.</p><div><hr></div><h3><strong>05 |</strong> &#8220;Don&#8217;t fall into the trap of reductionism.&#8221;</h3><p><strong>Last question &#8211; what advice would you give LPs investing into emerging managers?</strong></p><p>Don&#8217;t fall into the trap of reductionism. This is not a business of clear recipes, of completing a checklist. Be as holistic as possible.</p><p><strong>And what about advice for emerging managers just getting started?</strong></p><p>Think deeply about why you want to do this. Don&#8217;t do this for financial reasons. It&#8217;s too hard. On a probability-weighted basis there are more lucrative paths. There has to be a ton of motivation and purpose behind doing it.</p><p><strong>That&#8217;s so true. 
And that same advice extends to founders building companies as well.</strong></p>]]></content:encoded></item><item><title><![CDATA[Open-Source is Eating Software]]></title><description><![CDATA[Three new business models for open-source products.]]></description><link>https://www.thetimes.blog/p/open-source-is-eating-software</link><guid isPermaLink="false">https://www.thetimes.blog/p/open-source-is-eating-software</guid><dc:creator><![CDATA[Evan O'Donnell]]></dc:creator><pubDate>Mon, 24 Mar 2025 14:50:11 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!F8Lk!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F759a8de3-33dd-4676-ad92-0e298c62f56c_636x636.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3><strong>01 |</strong> An overlooked, $8.8 trillion industry</h3><p>Last week, a Harvard Business School <a href="https://timespan.us22.list-manage.com/track/click?u=1ef09c473dad0e37c3fb15e20&amp;id=b92affb03a&amp;e=abab8c0019">paper</a> was circulating, reigniting <a href="https://timespan.us22.list-manage.com/track/click?u=1ef09c473dad0e37c3fb15e20&amp;id=0085f94dd2&amp;e=abab8c0019">conversation</a> about the commercial potential of open-source software (OSS).</p><p><strong>The authors of the paper estimate that OSS has created roughly $8.8 trillion in economic value.</strong></p><p>That figure came from scanning the codebases of tens of thousands of companies, identifying OSS components, and estimating what it would cost to rebuild them from scratch.</p><p>The research also highlights just how prevalent open-source is:</p><blockquote><p><em>&#8220;<strong>[OSS]</strong> <strong>appears in 96% of codebases</strong>, and some commercial software consists of up to 99.9% freely available OSS.&#8221;</em></p></blockquote><p>OSS is clearly integral to the digital economy. But that hasn&#8217;t always translated into commercial success.</p><p><strong>There have been a few venture-backed winners.</strong> MongoDB hit a <a href="https://timespan.us22.list-manage.com/track/click?u=1ef09c473dad0e37c3fb15e20&amp;id=83db249514&amp;e=abab8c0019">$1.9B</a> market cap six months post-IPO. Red Hat sold to IBM for <a href="https://timespan.us22.list-manage.com/track/click?u=1ef09c473dad0e37c3fb15e20&amp;id=07aa2b88db&amp;e=abab8c0019">$34B</a> (bigger than <a href="https://timespan.us22.list-manage.com/track/click?u=1ef09c473dad0e37c3fb15e20&amp;id=d8861550af&amp;e=abab8c0019">Wiz</a>!). Still, the vast majority of OSS projects have struggled to grow into venture-scale companies.</p><p>I first explored the HBS paper and questions around monetization <a href="https://timespan.us22.list-manage.com/track/click?u=1ef09c473dad0e37c3fb15e20&amp;id=b2bf7863f7&amp;e=abab8c0019">a couple months ago</a>. But with the cost of writing code collapsing, and software economics shifting fast, it feels like a good time to take a closer look.</p><div><hr></div><h3><strong>02 | </strong>The old OSS playbook</h3><p>Over the past two decades, <a href="https://timespan.us22.list-manage.com/track/click?u=1ef09c473dad0e37c3fb15e20&amp;id=16176fbe8b&amp;e=abab8c0019">95%+</a> of all venture dollars went to non-OSS software products built on proprietary, closed architecture. These products create value by owning the code, controlling the data, and locking in users through polished UX and tightly managed integrations.</p><p>Open-source projects, on the other hand, give code away for free. 
<strong>These projects typically monetize by charging for extra features</strong> &#8212; hosting, customer support, and other premium services like enterprise controls and advanced security.</p><p>MongoDB is a classic example. Mongo&#8217;s core NoSQL database is open-source, free for anyone to deploy and host in their own environment. But customers can upgrade and pay for <a href="https://timespan.us22.list-manage.com/track/click?u=1ef09c473dad0e37c3fb15e20&amp;id=6e18207eeb&amp;e=abab8c0019">Atlas</a>, MongoDB&#8217;s fully managed cloud offering. Customers will pay for convenience and support &#8212; in Q4-2024, Atlas <a href="https://timespan.us22.list-manage.com/track/click?u=1ef09c473dad0e37c3fb15e20&amp;id=2feb16f8bb&amp;e=abab8c0019">generated</a> a $1.2 billion run-rate, up 34% from the year prior.</p><p>This model works &#8211; but only under certain conditions.</p><p><strong>To succeed, OSS companies have to convert usage into reliance, and reliance into revenue.</strong> That usually means becoming essential infrastructure (e.g., databases that become the system of record), a network with very high switching costs, or a central hub that integrates multiple third-party systems.</p><p>However, most OSS projects are lightweight and easy to swap out. If your core asset is free and replaceable, building up that defensibility is tough.</p><p><strong>But today, that&#8217;s changing.</strong></p><div><hr></div><h3><strong>03 | </strong>Three new OSS business models</h3><p>New technical shifts &#8212; like AI automation, modular backends, and blockchain systems &#8212; are unlocking new ways for OSS to capture value.<br><br>There are three commercial paths I see for OSS over the next several years.</p><p></p><blockquote><p><strong>A. Open-core (the familiar model, now upgraded)</strong></p></blockquote><p>The classic model &#8211; offer the core product for free, charge for premium features &#8211; will get supercharged by AI.</p><p><strong>Paid tiers will move from static add-ons, like dashboards and hosting, to more intelligent (and valuable) automation</strong> &#8211; self-healing systems, agents that manage entire workflows, and adaptive interfaces.</p><p>For example:</p><ul><li><p>A free documentation generator can offer a paid AI tier that answers developer questions in plain English, citing relevant sections from your own docs and enriching answers with best practices distilled across its entire user base.</p></li><li><p>An open-source security scanner offers a paid tier where AI automatically flags risky code and makes the fix.</p></li></ul><p>OSS still benefits from community validation and wide distribution. But with AI, the commercial scope of OSS is expanding. In the future, I expect more projects will scale from <strong>useful tools</strong> to <strong>venture-backed platforms</strong>.</p><p></p><blockquote><p><strong>B. Network management as the product</strong></p></blockquote><p><strong>In the coming years, AI agents will become the primary users of software.</strong></p><p>As they take on more tasks &#8212; like sending messages, retrieving data, or executing transactions &#8212; they&#8217;ll need infrastructure to decide <em>which</em> third-party APIs to call and <em>how</em> to route and negotiate those requests.</p><p>Take travel booking, for example. 
<p>Take travel booking, for example. An AI agent might query airline APIs, compare prices, email an itinerary, and complete a transaction &#8212; choosing different providers based on real-time prices, availability, and user preferences.</p><p>This creates a new kind of software marketplace, one where agents navigate networks of interoperable, third-party services.</p><p><strong>Open-source can define the </strong><em><strong>standards</strong></em><strong> for these networks</strong> &#8212; the open protocols, SDKs, and connectors that make them accessible and interoperable across service providers and agents. This open access layer enables broad participation and rapid ecosystem growth.</p><p><strong>But the commercial opportunity lies in managing the network</strong> &#8212; shaping how agents select services, how requests are routed, and how providers compete to serve agent demand.</p><p>Startups building this layer can monetize through:</p><ul><li><p><strong>Paid tools </strong>that help service providers manage agent traffic, configure access rules, or target specific types of agent requests.</p></li><li><p><strong>Fees on transactions</strong> between agents and third-party services.</p></li><li><p><strong>Priority access</strong>, where providers pay for better placement based on performance guarantees.</p></li></ul><p><strong>In some cases, decentralized infrastructure, like blockchains with smart contracts, may be especially well-suited to this model</strong> &#8211; enabling more automated network management, pricing, access, and security without the overhead of a central intermediary. These systems offer transparent, programmable, and permissionless access, a natural fit for more automated, agent-driven workflows.</p><p>This is a big shift. It moves the OSS business model from selling features to managing networks.</p><p>Here, OSS lays the groundwork, creating open access to the network. The business is in orchestrating its flow.</p><p></p><blockquote><p><strong>C. Open-source as a data engine</strong></p></blockquote><p>In this model, open-source software becomes a way to collect valuable data.</p><p>It works like this: A startup releases a small, free tool (like a code library or SDK) that solves a specific problem for developers. Because it&#8217;s useful and free, lots of developers add it to their projects.</p><p>Every time a developer deploys the tool, it quietly collects useful information &#8212; like how a system is running, or how users are engaging. Over time and across multiple applications, this adds up to a unique, dynamic dataset that customers pay to access.</p><p><a href="https://timespan.us22.list-manage.com/track/click?u=1ef09c473dad0e37c3fb15e20&amp;id=c95aae01bd&amp;e=abab8c0019">Langfuse</a> is a good example. Their open SDK captures LLM performance data, which flows into an observability platform users can pay for.</p><p><strong>This model is especially well-suited for the current market.</strong> Demand for this type of data is growing, because AI systems need constant feedback to improve. And advances in encryption now make it safe to extract and transmit this data from private environments.</p><p>In this paradigm, OSS is the sensor that gets embedded. 
The business model is in the signal it sends back.</p><div><hr></div><h3><strong>04 | </strong>Open-source is eating software</h3><p>At Timespan, we believe open systems are the next paradigm of software, and we specialize in applications built on these new, open protocols.</p><p>Today, coding automation is driving the marginal cost of software development to zero &#8211; lowering barriers to OSS adoption and unlocking new distribution and business models where open-source has a natural edge.</p><p><strong>Two of the fastest-growing startups today, <a href="https://timespan.us22.list-manage.com/track/click?u=1ef09c473dad0e37c3fb15e20&amp;id=0c5fd063ed&amp;e=abab8c0019">Cursor</a> and <a href="https://timespan.us22.list-manage.com/track/click?u=1ef09c473dad0e37c3fb15e20&amp;id=bc57b9752b&amp;e=abab8c0019">Lovable</a>, started as open-source.</strong></p><p>Cursor <a href="https://timespan.us22.list-manage.com/track/click?u=1ef09c473dad0e37c3fb15e20&amp;id=15097d9844&amp;e=abab8c0019">launched on Github in 2023</a> as a fork of VS Code, using its open-source base to fast-track adoption and layer in AI workflows &#8211; scaling to $100M ARR in under a year. Lovable began as <a href="https://timespan.us22.list-manage.com/track/click?u=1ef09c473dad0e37c3fb15e20&amp;id=c0284ffb70&amp;e=abab8c0019">&#8220;GPT Engineer&#8221; in mid-2023</a>, an open-source app generator that built early traction and evolved into a full-stack paid platform with hundreds of thousands of active users.</p><p>I expect to see more open-source projects evolve into venture-scale businesses, powered by intelligent services, agent-based execution networks, and application-native pipelines that generate unique data.</p><p><strong>In all three cases, OSS provides a structural advantage. </strong>It reduces unnecessary costs, increases transparency, and drives adoption through large, engaged developer communities.</p><p>Open-source is eating proprietary software.</p><p><strong>Soon, $8.8 trillion will look small.</strong></p>]]></content:encoded></item><item><title><![CDATA[Part II: The Rules of the Code]]></title><description><![CDATA[Open protocols for the autonomous web]]></description><link>https://www.thetimes.blog/p/part-ii-the-rules-of-the-code</link><guid isPermaLink="false">https://www.thetimes.blog/p/part-ii-the-rules-of-the-code</guid><dc:creator><![CDATA[Evan O'Donnell]]></dc:creator><pubDate>Thu, 13 Feb 2025 19:28:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!F31f!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36f5eac7-b779-4aba-8823-f854227460cc_1865x2760.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><a href="https://thetimesblog.substack.com/p/part-i-100k-emails-that-shaped-the">Part I</a> took us through the Crypto Wars, the fight for the open standards that shaped the early internet. 
This Part II digs into implications for this coming tech cycle &#8211; as software becomes more autonomous.</p><div><hr></div><h3>01 | Lessons from the Cypherpunks</h3><p>The <a href="https://timespan.us22.list-manage.com/track/click?u=1ef09c473dad0e37c3fb15e20&amp;id=c395a60483&amp;e=abab8c0019">Cypherpunks</a> &#8211; and their fight for open encryption &#8211; offer some timeless lessons about technology adoption:</p><blockquote><p><strong>01/ Foundational technologies tend to move toward open access and standardization &#8211; </strong><em><strong>especially</strong></em><strong> when their principal value hinges on interoperability.</strong></p></blockquote><p>We saw this with open-source encryption and protocols like HTTP. Because they were free and accessible, they scaled faster than proprietary alternatives. And this broad adoption was essential for their utility &#8211; adding value <em>across an entire network</em>, not just isolated cases.</p><blockquote><p><strong>02/ When a new technology standard gets in the hands of developers and delivers value to end users, attempts to centralize or suppress it tend to fail.</strong></p></blockquote><p>Government efforts to control encryption through (a) international export restrictions and (b) a &#8220;backdoor&#8221; with the Clipper Chip didn&#8217;t work &#8211; developer collaboration and free distribution outpaced these measures.</p><blockquote><p><strong>03/ Once a technological breakthrough becomes a foundational standard &#8211; and costs asymptote &#8211; value accrual moves to the application layer.</strong></p></blockquote><p>In the 1990s, open-source encryption enabled secure transactions at scale, which laid the foundation for commercial applications that could channel and monetize users, transactions, and data.</p><div><hr></div><h3><strong>02 | </strong>Applications come in waves</h3><p>Technology breakthroughs like TCP/IP and the first transformer model are unpredictable.</p><p>Research and developer forums hint at what's mathematically possible; but step-function improvements remain hard to forecast until the prototype is built, computation is run, and benchmarks are tested.</p><p><strong>But new breakthroughs in software often lead to a follow-on wave of open standards, which help get technology into the hands of users.</strong></p><p>As the <em>most commoditized</em> parts of the stack, open-source protocols and standardized frameworks serve as the &#8220;plumbing and wiring&#8221; of digital products &#8211; setting a baseline for performance, guiding how data flows, and shaping how systems interact in any given technological moment.</p><p><strong>They inform what&#8217;s possible &#8211; and what isn&#8217;t.</strong></p><p>For example, early Internet protocols like TCP/IP, HTTP, and SMTP enabled basic digital communication for the first time. But these protocols handled data in a static and siloed manner. Applications were limited to basic content delivery. Interoperability required developers to build custom middleware.</p><p>In the early 2000s, commercial-scale virtualization &#8211; the ability to abstract computing resources from physical hardware &#8211; ushered in the cloud era. A wave of new, open standards &#8211; RESTful APIs, containers, and open-source databases &#8211; streamlined software integrations and scaled complex data processing. 
This directly enabled platforms like Shopify, Uber, and Netflix.</p><p>But these new protocols lacked inherent mechanisms for security, identity management, quality assurance, and distribution. And so proprietary platforms &#8211; like AWS and iOS &#8211; filled those gaps with managed services, cloud infrastructure, and app stores. (Even today, security, authentication, and performance monitoring are the <a href="https://timespan.us22.list-manage.com/track/click?u=1ef09c473dad0e37c3fb15e20&amp;id=ba16b9800c&amp;e=abab8c0019">most purchased</a> SaaS tools &#8211; reinforcing the limits of open standards in the cloud era and the ongoing dependence on proprietary solutions for these critical functions.)</p><p><strong>By studying the open protocols that follow each breakthrough &#8211; their </strong><em><strong>specific </strong></em><strong>capabilities and limitations &#8211; entrepreneurs and investors can anticipate the business models, distribution vectors, and product primitives that will win.</strong></p><p>And history shows that each wave of open standards builds on the last, increasing the total value of the application layer with every new technology cycle. (A rough analysis suggests a <strong>13-15x</strong> increase in the value of the application layer from the early Internet to the cloud era.<sup>[1]</sup>)</p><p>If this pattern holds, the next wave &#8211; driven by autonomous software &#8211; will introduce new open standards that radically expand the scale and scope of value creation for new startups, beyond what we&#8217;ve seen in prior technology cycles.</p><div><hr></div><h3>03 | Open protocols for the autonomous web</h3><p><strong>Gradually, over the next several years, we expect AI agents to become a core component of the software stack.</strong></p><p>Unlike traditional workflows that rely on human input and predefined rules, AI agents are <strong>autonomous</strong>, performing tasks and interacting with applications, services, and data with minimal human oversight. This introduces two key vulnerabilities that existing standards fail to address:</p><ol><li><p><strong>Unbounded data manipulation:</strong> AI agents autonomously query, generate, and transmit data across systems, often combining information in ways that were not explicitly designed or anticipated.</p></li><li><p><strong>Static access controls:</strong> Existing permissioning frameworks are static and rule-based, designed for human users rather than adaptive AI agents. Without more dynamic, context-aware protocols, there's a risk of over- or under-restricting access &#8211; leading to data leaks, misuse, and slow performance.</p></li></ol><p>Addressing these challenges requires rethinking our existing protocols. From my conversations with early-stage companies, I'm seeing a fresh wave of open standards coming to market that can support AI-native applications and drive real agentic autonomy. 
Specifically:</p><ul><li><p><strong>Context-aware permissioning and authentication: </strong>Universal frameworks to verify agent identity <em>and</em> intention before granting access.</p></li><li><p><strong>New, privacy-preserving computation:</strong> Decentralized methods like homomorphic encryption<sup>[2]</sup> that allow agents to process encrypted data without decryption or exposing raw inputs.</p></li><li><p><strong>Automated communication protocols: </strong>Frameworks that enhance data with universal schemas, enabling deeper cross-platform interoperability.</p></li><li><p><strong>Transparent decision logging: </strong>Tamper-proof logs of agent actions and decisions, for auditability and accountability. These can be complemented with tools that auto-detect anomalies and restrict operations in response to unexpected agentic behavior.</p></li><li><p><strong>Open-source models:<sup>[3]</sup> </strong>AI models with open architectures, weights, and training code, which provide more flexibility, lower costs, and reduce lock-in with proprietary systems.</p></li><li><p><strong>Automated observability and remediation:</strong> Open frameworks for real-time monitoring, enabling automated anomaly detection, response, and recovery.</p></li></ul><p>Achieving the above objectives will require a major redesign of data standards &#8212; those that replace<em> <strong>convenience for humans </strong></em>with<strong> </strong><em><strong>interoperability for machines</strong></em><strong> </strong>as the primary success metric.</p><p></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!F31f!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36f5eac7-b779-4aba-8823-f854227460cc_1865x2760.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!F31f!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36f5eac7-b779-4aba-8823-f854227460cc_1865x2760.png 424w, https://substackcdn.com/image/fetch/$s_!F31f!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36f5eac7-b779-4aba-8823-f854227460cc_1865x2760.png 848w, https://substackcdn.com/image/fetch/$s_!F31f!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36f5eac7-b779-4aba-8823-f854227460cc_1865x2760.png 1272w, https://substackcdn.com/image/fetch/$s_!F31f!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36f5eac7-b779-4aba-8823-f854227460cc_1865x2760.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!F31f!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36f5eac7-b779-4aba-8823-f854227460cc_1865x2760.png" width="1456" height="2155" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/36f5eac7-b779-4aba-8823-f854227460cc_1865x2760.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:2155,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:548184,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://thetimesblog.substack.com/i/158387679?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36f5eac7-b779-4aba-8823-f854227460cc_1865x2760.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!F31f!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36f5eac7-b779-4aba-8823-f854227460cc_1865x2760.png 424w, https://substackcdn.com/image/fetch/$s_!F31f!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36f5eac7-b779-4aba-8823-f854227460cc_1865x2760.png 848w, https://substackcdn.com/image/fetch/$s_!F31f!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36f5eac7-b779-4aba-8823-f854227460cc_1865x2760.png 1272w, https://substackcdn.com/image/fetch/$s_!F31f!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36f5eac7-b779-4aba-8823-f854227460cc_1865x2760.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><p><strong>The Crypto Wars taught us a vital lesson &#8211; during major technological shifts, open standards consistently outpace closed ones. 
</strong>Markets reward applications built on open, cost-efficient protocols and punish those that try to lock them down.</p><p>We're at the start of a new wave of open standards, as automation and decentralization become the new paradigm.</p><p>The entrepreneurs who understand and adapt to this shift will be the long-term winners in this next cycle. That&#8217;s where I&#8217;m placing my bets.</p><p></p><div><hr></div><div><hr></div><p><strong>Endnotes:</strong></p><p><strong>[1] </strong>This analysis estimates the <em>comparative</em> value of the application layer across two key technology eras: the early Internet era (1985-2001) and the cloud era (2002-2020). To do this, I identified the top 15 public technology companies by revenue in each era (excluding companies that did not generate meaningful revenue from software applications). For each company, I examined the 10-K from the year in which its revenue peaked within the era and isolated the portion derived from application-layer products. Revenue from other business lines, such as hardware, consulting, and infrastructure, was excluded. The total application-layer revenue across these 15 companies was then aggregated and compared between the two periods &#8211; 14.1x greater in the cloud era than the early Internet era. <strong>This analysis is designed for relative comparison rather than absolute measurement of the application layer's total market size.</strong> There are several limitations here: (i) company selection is based on revenue rather than market cap or other measures of industry impact; (ii) financial reporting varies across companies and time periods, making precise revenue isolation imperfect; (iii) the long-tail of application-layer businesses are underrepresented due to lack of information or accounting constraints; and (iv) the definition of the application layer evolves, making cross-era comparisons inherently approximate. Despite these limitations, the approach provides a <em>directional</em> sense of how the application layer's relative value has shifted over time.</p><p><strong>[2]</strong> <em>Open-source AI models</em> make their architecture, pre-trained weights, training code, and inference code freely available for use, modification, and redistribution. 
Unlike proprietary models, which restrict access through APIs, open-source models provide the raw components developers can directly integrate, fine-tune, and deploy within their own applications.</p><p><strong>[3] </strong><em>Homomorphic encryption</em> is an advanced encryption technique that allows computations to be performed directly on encrypted data without decrypting it, preserving privacy and security in data transmission.</p><p><strong>[4]</strong> <em>Secure multiparty computation </em>refers to a cryptographic technique that enables multiple parties to jointly compute a function over their inputs without revealing the inputs to each other.</p><p><strong>[5] </strong><em>JSON-LD</em> is a lightweight format for linking and structuring data using JSON, designed to make data machine-readable across disparate systems while maintaining human readability.</p>]]></content:encoded></item><item><title><![CDATA[Part I: 100k Emails that Shaped the Early Internet]]></title><description><![CDATA[Exploring the Cypherpunks &#8211; pioneers who championed cryptography and open standards during the internet's formative years.]]></description><link>https://www.thetimes.blog/p/part-i-100k-emails-that-shaped-the</link><guid isPermaLink="false">https://www.thetimes.blog/p/part-i-100k-emails-that-shaped-the</guid><dc:creator><![CDATA[Maaria Bajwa]]></dc:creator><pubDate>Tue, 11 Feb 2025 19:33:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!yQQC!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90ade0b7-a0bb-4007-a17e-48923367d720_1425x1984.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This <strong>Part I </strong>explores the history of the Crypto Wars &#8211; how battles over encryption shaped early technology adoption and regulation in the 1990s. <strong><a href="https://thetimesblog.substack.com/p/part-ii-the-rules-of-the-code">Part II</a></strong> looks at current investment opportunities in companies leveraging the new, open standards that will come to power autonomous software.</p><div><hr></div><h3><strong>01 | </strong>Decoding the emails that shaped the internet</h3><p>At Timespan, we study historical patterns to anticipate how technology will evolve.</p><p><strong>The Crypto Wars of the 1990s offer a fascinating lens into how transformative technologies emerge despite regulatory resistance. </strong>(Note: &#8220;Crypto&#8221; here refers to cryptography, not cryptocurrency.)</p><p>Strong cryptography &#8211; the mathematical practice of encoding information for privacy &#8211; became the foundation of our modern digital economy. Without it, the basic operations we take for granted would be impossible: online banking, e-commerce, secure messaging, digital signatures, and virtually every sensitive online interaction.</p><p>In the early 1990s, the U.S. government classified strong encryption as a &#8220;munition&#8221; under arms trafficking laws, effectively making it illegal to export this technology outside the U.S. This classification threatened to splinter the emerging internet into a two-tier system: weak encryption for international users and strong encryption for domestic ones.</p><p>Enter the Cypherpunks &#8211; a group of technologists, mathematicians, and privacy advocates who believed privacy and encryption were fundamental rights in the digital age. 
Notable Cypherpunks include Hal Finney, Adam Back, Julian Assange, Marc Andreessen and (possibly) Satoshi Nakamoto.</p><div class="captioned-image-container"><figure><a class="image-link" target="_blank" href="https://substackcdn.com/image/fetch/$s_!yQQC!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90ade0b7-a0bb-4007-a17e-48923367d720_1425x1984.webp"><img src="https://substackcdn.com/image/fetch/$s_!yQQC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90ade0b7-a0bb-4007-a17e-48923367d720_1425x1984.webp" width="1425" height="1984" alt=""></a></figure></div><p><em>ID: &lt;<a href="mailto:9210050721.AA01865@soda.berkeley.edu">9210050721.AA01865@soda.berkeley.edu</a>&gt;<sup>[1]</sup></em></p><p><strong>From 1992 to 1998, over 100,000 messages were exchanged across a range of topics</strong> <strong>on the Cypherpunk listserv</strong>. A decade ago someone preserved these emails in eight raw text files<sup>[2]</sup> that were unstructured and difficult to read.</p><p>To make these emails accessible, I used custom parsing algorithms to transform the raw text files into an interactive digital archive:</p><blockquote><p><a href="https://cypherpunk.timespan.vc/">cypherpunk.timespan.vc</a></p></blockquote><p><strong>These archives serve as a unique, digital time capsule of the movement that was the precursor to the modern internet. </strong>We have the opportunity to study these archives and derive valuable insights into patterns of technology adoption during periods of significant innovation.</p><p>In Part I of this blog post we&#8217;ll dig into the history and context of the Crypto Wars, particularly around technology adoption cycles. Part II will break down how we apply the learnings from the past to identify investment opportunities today.</p><div><hr></div><h3><strong>02 | Major technology transformations coalesce around open standards</strong></h3><p>Early Cypherpunk emails underscore a timeless principle &#8212;<strong> new technology requires open, shared frameworks to facilitate developer collaboration and achieve widespread adoption.</strong></p><p>The evolution of two different encryption standards &#8211; RSA and PGP &#8211; reinforces this concept.</p><p>Developed in 1977, the RSA standard (named after its creators Rivest-Shamir-Adleman) marked a breakthrough in public key cryptography. 
It introduced a method for secure data transmission over public networks, laying the groundwork for encrypted communication over the internet.</p><p><strong>However, RSA&#8217;s initial patent restricted the technology&#8217;s widespread use, limiting adoption.</strong></p><p>In contrast, PGP (Pretty Good Privacy), created by Phil Zimmermann in 1991, democratized encryption by offering an <em>unlicensed, freely available</em> implementation of RSA. Zimmermann published PGP&#8217;s source code online, giving anyone the ability to encrypt their emails, files, and messages.</p><div class="captioned-image-container"><figure><a class="image-link" target="_blank" href="https://substackcdn.com/image/fetch/$s_!X84u!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb858b17c-9b35-43d4-a05d-80623fc58db7_1425x1159.jpeg"><img src="https://substackcdn.com/image/fetch/$s_!X84u!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb858b17c-9b35-43d4-a05d-80623fc58db7_1425x1159.jpeg" width="1425" height="1159" alt=""></a></figure></div><p><em>ID:&lt;<a href="mailto:9211161902.AA08397@newsu.shearson.com">9211161902.AA08397@newsu.shearson.com</a>&gt;</em></p><p><strong>The openness of PGP was the catalyst needed to fuel rapid adoption of the RSA cryptography standards.</strong></p><p>The tension between RSA and PGP was frequently discussed on the Cypherpunk listserv, and there was general consensus that foundational technology breakthroughs only become useful if they are open and accessible.</p><div class="captioned-image-container"><figure><a class="image-link" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ftkW!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F120f08df-153f-42b6-89d5-49d230ccba9f_1425x1731.webp"><img src="https://substackcdn.com/image/fetch/$s_!ftkW!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F120f08df-153f-42b6-89d5-49d230ccba9f_1425x1731.webp" width="1425" height="1731" alt=""></a></figure></div><p><em>ID:&lt;<a href="mailto:2.2.32.19960409153328.0075c2c0@panix.com">2.2.32.19960409153328.0075c2c0@panix.com</a>&gt;</em></p><p>The Cypherpunks championed cryptography and encryption not for secrecy, but as a trusted tool for securely sharing private information &#8211; like credit card data &#8211; which was essential to the new digital economy.</p><p>They pushed back against U.S. policy, which made it illegal to export strong encryption overseas. For them, these bans hindered the growth of the internet and hurt U.S. global competitiveness.</p><div class="captioned-image-container"><figure><a class="image-link" target="_blank" href="https://substackcdn.com/image/fetch/$s_!fevW!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F75a589d0-d8af-4678-aec3-a527983889e7_1425x1955.webp"><img src="https://substackcdn.com/image/fetch/$s_!fevW!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F75a589d0-d8af-4678-aec3-a527983889e7_1425x1955.webp" width="1425" height="1955" alt=""></a></figure></div><p><em>ID:&lt;<a href="mailto:199409220341.UAA02254@jobe.shell.portal.com">199409220341.UAA02254@jobe.shell.portal.com</a>&gt;</em></p><p>The Cypherpunks, as the early advocates and builders of encryption, emphasized distribution and championed this technology as a fundamental right, likening it to public infrastructure &#8211; open, essential, and <em>unstoppable</em>.</p><div><hr></div><h3><strong>03 | </strong>How openness outpaces regulation and drives economic growth</h3><p>Until 1996 sending strong encryption overseas was considered a federal crime, the equivalent of exporting weapons to foreign adversaries.</p><p>This became the foundational issue of the Crypto Wars.</p><p>Phil Zimmermann became a central figure in this battle when he released his open-source version of PGP in 1991 and was accused by U.S. 
officials of violating federal law.<sup>[3]</sup></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!SjQU!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2edcddf3-9327-4e3b-b2ea-d6a38217c34f_1425x975.png"><img src="https://substackcdn.com/image/fetch/$s_!SjQU!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2edcddf3-9327-4e3b-b2ea-d6a38217c34f_1425x975.png" width="1425" height="975" alt="" loading="lazy"></a></figure></div><p><em>ID: &lt;<a href="mailto:9302132122.AA13118@nexsys.nexsys.net">9302132122.AA13118@nexsys.nexsys.net</a>&gt;</em></p><p>Zimmermann and his associates, including many from the Cypherpunk community, were subject to a three-year investigation that included subpoenas, Senate hearings, and threats of jail time.</p><p>However, by 1993, the U.S. policy of regulating encryption as a weapon began to unravel, given the rapid adoption of open encryption globally.</p><p>The Clinton administration made a few final attempts to control this technology. First, through a massive PR campaign portraying encryption and cryptography as a tool for &#8220;pornography, terrorists, tax evaders, and criminals.&#8221;<sup>[4]</sup> Second, through a centralized alternative. The U.S.
government announced the Clipper Chip initiative, a public-private partnership between the NSA and AT&amp;T, where chips in AT&amp;T&#8217;s devices would use cryptography co-developed with the NSA, giving the government &#8220;back door&#8221; access to private communication.<sup>[5]</sup></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!CKJu!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbd6ee980-7206-4e4e-adf8-56f7edb16711_1425x1325.webp"><img src="https://substackcdn.com/image/fetch/$s_!CKJu!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbd6ee980-7206-4e4e-adf8-56f7edb16711_1425x1325.webp" width="1425" height="1325" alt="" loading="lazy"></a></figure></div><p>The Clipper Chip initiative never took off, and by the mid-1990s, the Cypherpunks had won the Crypto Wars in the court of public opinion.</p><p>In 1996, President Clinton signed Executive Order 13026, which shifted the regulatory framework of commercial encryption from munitions to commerce.
The investigation into Zimmermann was dropped, and export controls on encryption were dismantled.<sup>[6]</sup></p><p><strong>This history shows &#8211; new technology, especially when cheap and widely accessible, does not conform to policy frameworks that try to suppress or control it.</strong></p><p>Regulatory attempts to limit or slow new innovation ultimately hurt domestic economic interests, and over time governments are forced to change these policies as the consequences become too significant to ignore.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!GhTo!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f679b5f-1d1f-4652-9dde-05920cf8a026_1425x1012.png"><img src="https://substackcdn.com/image/fetch/$s_!GhTo!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f679b5f-1d1f-4652-9dde-05920cf8a026_1425x1012.png" width="1425" height="1012" alt="" loading="lazy"></a></figure></div><p><em>ID: &lt;<a href="mailto:9310091757.AA04856@columbine.cgd.ucar.EDU">9310091757.AA04856@columbine.cgd.ucar.EDU</a>&gt;</em></p><p>While U.S.
policymakers focused on regulating cryptography as a weapon that could be used for criminal activity, the Cypherpunks knew cryptography was needed to unlock new markets on the internet &#8211; e-commerce, email, cloud computing, data storage, and more.</p><p><strong>This lesson &#8211; that open-source, foundational technologies typically outpace regulation and centralized alternatives &#8211; is relevant today.</strong></p><p>When new innovations provide solutions that are <em>better-faster-cheaper</em>, they drive mainstream adoption, making it prudent for governments and businesses to adapt rather than suppress.</p><div><hr></div><h3><strong>04 | </strong>Cypherpunk cypherbites</h3><p>While the Cypherpunks were pioneering internet privacy, their mailing list wasn&#8217;t all cryptographic algorithms and policy debates &#8211; the archives reveal the Cypherpunks were just like us.</p><p>Here are some cherry-picked emails for your amusement.</p><p><strong>Julian Assange is very particular about his parties</strong></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!kp2K!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcab8023d-37a8-4413-bb0f-92c9b793d9c1_1425x1315.webp"><img src="https://substackcdn.com/image/fetch/$s_!kp2K!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcab8023d-37a8-4413-bb0f-92c9b793d9c1_1425x1315.webp" width="1425" height="1315" alt="" loading="lazy"></a></figure></div><p><em>ID: &lt;<a href="mailto:199512300046.LAA16884@suburbia.net">199512300046.LAA16884@suburbia.net</a>&gt;</em></p><p><strong>Marc Andreessen was responding to internet trolls back in 1994</strong></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!p9ZM!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b620982-ec99-45b2-a269-692c58c45989_1425x1315.webp"><img src="https://substackcdn.com/image/fetch/$s_!p9ZM!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b620982-ec99-45b2-a269-692c58c45989_1425x1315.webp" width="1425" height="1315" alt="" loading="lazy"></a></figure></div><p><em>ID: &lt;<a href="mailto:199412112227.WAA23971@neon.mcom.com">199412112227.WAA23971@neon.mcom.com</a>&gt;</em></p><p><strong>Clinton&#8217;s first email address: <a href="mailto:75300.3115@compuserve.com">75300.3115@compuserve.com</a></strong></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!xRlO!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fec08aa22-2f0b-4ae4-b591-f5efe480446c_1425x1583.webp"><img src="https://substackcdn.com/image/fetch/$s_!xRlO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fec08aa22-2f0b-4ae4-b591-f5efe480446c_1425x1583.webp" width="1425" height="1583" alt="" loading="lazy"></a></figure></div><p><em>ID: &lt;<a href="mailto:9302032358.AA00498@xanadu.xanadu.com">9302032358.AA00498@xanadu.xanadu.com</a>&gt;</em></p><p><strong>Bitcoin was almost named CRASH</strong></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!FTTS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32ce5179-f8d3-4818-b529-0f2e8fc4d78b_1425x1108.png"><img src="https://substackcdn.com/image/fetch/$s_!FTTS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32ce5179-f8d3-4818-b529-0f2e8fc4d78b_1425x1108.png" width="1425" height="1108" alt="" loading="lazy"></a></figure></div><p><em>ID: &lt;<a href="mailto:9312070630.AA28857@jobe.shell.portal.com">9312070630.AA28857@jobe.shell.portal.com</a>&gt;</em></p><p><strong>Sung to the tune of &#8220;Santa Claus is Coming to Town&#8221;</strong></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!2-9r!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd6e2882a-0b51-4cb8-aed9-8b81e9f2e4d3_1425x2027.webp"><img src="https://substackcdn.com/image/fetch/$s_!2-9r!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd6e2882a-0b51-4cb8-aed9-8b81e9f2e4d3_1425x2027.webp" width="1425" height="2027" alt="" loading="lazy"></a></figure></div><p><em>ID: &lt;<a href="mailto:199712251031.LAA09046@basement.replay.com">199712251031.LAA09046@basement.replay.com</a>&gt;</em></p><div><hr></div><p>Please explore the <a href="http://cypherpunk.timespan.vc/">Cypherpunk archives</a> and let me know what you discover!</p><p>email: <a href="mailto:maaria@timespan.vc">maaria@timespan.vc</a> <strong>||</strong> twitter: <a href="https://x.com/maariabajwa">@maariabajwa</a> <strong>||</strong> linkedin: <a href="https://www.linkedin.com/in/maariab/">@maariab</a> <strong>||</strong> github: <a href="https://github.com/iloveburritos">@iloveburritos</a></p><p>I&#8217;ll repost my favorite replies.</p><div><hr></div><p><strong>Endnotes:</strong></p><p><strong>[1]</strong> To read a message in its entirety, search &#8220;<em>ID: &lt;<a href="mailto:9210050721.AA01865@soda.berkeley.edu">9210050721.AA01865@soda.berkeley.edu</a>&gt;</em>&#8221; in the search box on <a href="https://cypherpunk.timespan.vc/">https://cypherpunk.timespan.vc/</a></p><p><strong>[2]</strong> <a href="https://cypherpunks.venona.com/date/">https://cypherpunks.venona.com/date/</a></p><p><strong>[3]</strong> A recent appeals court decision ruled that OFAC sanctions against the privacy protocol Tornado Cash were unlawful because open-source code and immutable smart contracts are not capable of being owned and therefore cannot be considered legal property. While these decisions will take years to finalize, they are indicators of a shifting regulatory landscape in a rapidly changing technology market. <a href="https://storage.courtlistener.com/recap/gov.uscourts.txwd.1211705/gov.uscourts.txwd.1211705.99.0.pdf">https://storage.courtlistener.com/recap/gov.uscourts.txwd.1211705/gov.uscourts.txwd.1211705.99.0.pdf</a></p><p><strong>[4]</strong> More recently, we have seen similar claims made against cryptocurrencies and blockchain technology.
"At a hearing of the Senate Banking, Housing, and Urban Affairs Committee, U.S. Senator Elizabeth Warren (D-Mass.) called out crypto&#8217;s use by terrorists, ransomware gangs, drug dealers, and rogue states to launder funds." <a href="https://www.warren.senate.gov/newsroom/press-releases/icymi-at-hearing-warren-warns-about-cryptos-use-for-money-laundering-by-rogue-states-terrorists-and-criminals">https://www.warren.senate.gov/newsroom/press-releases/icymi-at-hearing-warren-warns-about-cryptos-use-for-money-laundering-by-rogue-states-terrorists-and-criminals</a></p><p><strong>[5]</strong> The UK government recently ordered Apple to create a back door allowing them to retrieve all encrypted cloud content, for users around the world. This order would apply to users who have opted into Apple&#8217;s end-to-end encrypted cloud service. <a href="https://www.washingtonpost.com/technology/2025/02/07/apple-encryption-backdoor-uk/">https://www.washingtonpost.com/technology/2025/02/07/apple-encryption-backdoor-uk/</a></p><p><strong>[6]</strong> <a href="https://www.govinfo.gov/content/pkg/WCPD-1996-11-18/pdf/WCPD-1996-11-18-Pg2399.pdf">https://www.govinfo.gov/content/pkg/WCPD-1996-11-18/pdf/WCPD-1996-11-18-Pg2399.pdf</a></p>]]></content:encoded></item><item><title><![CDATA[Where is AI Going? A Conversation with Deon Nicholas]]></title><description><![CDATA[For this issue, I sat down with Deon Nicholas, co-founder of Forethought, to discuss the DeepSeek news, where AI is moving, and how to think about the application layer in the age of intelligent compute.]]></description><link>https://www.thetimes.blog/p/where-is-ai-going-a-conversation</link><guid isPermaLink="false">https://www.thetimes.blog/p/where-is-ai-going-a-conversation</guid><dc:creator><![CDATA[Evan O'Donnell]]></dc:creator><pubDate>Fri, 31 Jan 2025 20:15:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!olBq!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff9823a25-8c60-4726-b103-eca31dd59e5e_1980x1134.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>For this issue, I sat down with <a href="https://timespan.us22.list-manage.com/track/click?u=1ef09c473dad0e37c3fb15e20&amp;id=65db693a3c&amp;e=abab8c0019">Deon Nicholas</a>, co-founder of <a href="https://timespan.us22.list-manage.com/track/click?u=1ef09c473dad0e37c3fb15e20&amp;id=6ca017623d&amp;e=abab8c0019">Forethought</a>, to discuss the DeepSeek news, where AI is moving, and how to think about the application layer in the age of intelligent compute.</p><p><strong>Deon&#8217;s company Forethought builds advanced AI agents for customer support teams. Today, the company handles over one billion customer interactions each month for companies like Airtable and Upwork.</strong></p><p>Before Forethought, Deon built infrastructure and products at Facebook, Palantir, and Dropbox. He&#8217;s been featured in <a href="https://timespan.us22.list-manage.com/track/click?u=1ef09c473dad0e37c3fb15e20&amp;id=fe43f1d725&amp;e=abab8c0019">Fast Company</a>, <a href="https://timespan.us22.list-manage.com/track/click?u=1ef09c473dad0e37c3fb15e20&amp;id=632c2b6fde&amp;e=abab8c0019">CNBC</a>, and <a href="https://timespan.us22.list-manage.com/track/click?u=1ef09c473dad0e37c3fb15e20&amp;id=4ef42d7379&amp;e=abab8c0019">Bloomberg</a>. And &#8211; my favorite part of his bio &#8211; he&#8217;s a world finalist in competitive programming.</p><p>I was lucky to back Forethought early in 2020. 
I&#8217;ve loved watching Deon scale both the company and his vision ever since.</p><p>I hope you enjoy the conversation.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!olBq!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff9823a25-8c60-4726-b103-eca31dd59e5e_1980x1134.png"><img src="https://substackcdn.com/image/fetch/$s_!olBq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff9823a25-8c60-4726-b103-eca31dd59e5e_1980x1134.png" alt=""></a></figure></div><div><hr></div><h3>01 | DeepSeek</h3><p><strong>Let&#8217;s start with what&#8217;s on everyone&#8217;s mind &#8212; DeepSeek. You recently shared your perspective on <a href="https://timespan.us22.list-manage.com/track/click?u=1ef09c473dad0e37c3fb15e20&amp;id=432b2480bc&amp;e=abab8c0019">Bloomberg</a>, and here&#8217;s what I took away:</strong></p><ul><li><p><strong>New engineering techniques are making it much more efficient to build and run AI models.</strong></p></li><li><p><strong>This challenges the cost structure and defensibility of closed-model providers like OpenAI and Anthropic.</strong></p></li><li><p><strong>Over time, model costs will shift from training to inference.</strong></p></li><li><p><strong>As models get cheaper and more flexible, this will drive more value to the application layer.</strong></p></li></ul><p><strong>Do I have that right? What do you see as the biggest implications from DeepSeek&#8217;s release?</strong></p><p>The first part, around technological advancements, is the most interesting.</p><p>DeepSeek achieved some real engineering breakthroughs, leveraging chain-of-thought and reinforcement learning. Nothing fancy, just extreme optimization &#8211; and they were able to unlock emergent reasoning capabilities on par with OpenAI&#8217;s o1 model.</p><p>I&#8217;ve been a fan of reinforcement learning for years &#8211; my first AI internship at the Alberta Machine Intelligence Institute was in reinforcement learning. I&#8217;ve been waiting for it to make its debut in LLMs, and with advancements like <a href="https://timespan.us22.list-manage.com/track/click?u=1ef09c473dad0e37c3fb15e20&amp;id=b78def007e&amp;e=abab8c0019">AlphaGo</a>, it&#8217;s all converging in an exciting way.</p><p>Things are directionally becoming more efficient. But, in reality, it took hundreds of millions of dollars in research to achieve o1-level reasoning &#8211; and only after that could DeepSeek achieve similar performance with the reported $5M investment in training.</p><p>So the market selloff was probably an overreaction.</p><p>Jevons&#8217; Law still applies &#8211; greater efficiency will only drive more consumption.
Sure, we&#8217;ll spend less <em>per unit</em> of compute. But overall spend will grow as we run more and more computations. There&#8217;s just so much more progress to make.</p><p><strong>But how does this change the incentives for closed-model providers? NVIDIA is one thing, but closed-model research labs are pouring billions into building the newest model. If new models can be easily distilled and replicated, does that reduce the incentive to keep innovating?</strong></p><p>I agree. NVIDIA is safe for now &#8211; we're not moving away from their hardware anytime soon.</p><p>The bigger question is whether this erodes the moat of closed models like OpenAI and Anthropic. I&#8217;ve said for years that OpenAI has no real moat &#8211; once a model is out, replication is relatively easy.</p><p>That said, they still have lots of room to keep innovating.</p><p>Right now, it's language models, but there&#8217;s Sora and video models &#8211; there's always something new. Their lead won't disappear, but competitors will always be nipping at their heels.</p><p>Long-term, open source might win.</p><p>I don&#8217;t see OpenAI becoming a fairly valued, trillion-dollar company based solely on its models. Maybe they&#8217;ll generate massive revenue, but obsolescence and the need to constantly innovate will compress margins.</p><p>The real question is whether OpenAI shifts more to building application-layer products. Let&#8217;s remember, ChatGPT&#8217;s real success came from being a consumer product. Long-term, I think the application layer is where most value will accrue.</p><p>But what do <em>you </em>think about the incentive structure?</p><p><strong>I've been thinking about the open-source versus closed debate for a while. We&#8217;ll get more into that. But one thing seems clear &#8211; raw model performance alone feels like a tenuous moat.</strong></p><p><strong>I&#8217;d also expect OpenAI to shift toward more applications and tooling, leveraging its very real distribution edge and the ability to build a tight feedback loop between usage / input data and model development.</strong></p><p><strong>But if a closed-model provider built, for example, a new coding automation tool on top to try and monetize, I think a new startup with a superior product and deep focus could...</strong></p><p>Grow to be competitive. Yeah, I agree.</p><div><hr></div><h3>02 | Open-source vs closed models</h3><p><strong>The open-source versus closed model debate has been around for a while.</strong></p><p><strong>In 2023, the leaked Google memo </strong><em><strong>&#8220;<a href="https://timespan.us22.list-manage.com/track/click?u=1ef09c473dad0e37c3fb15e20&amp;id=079c8dc19d&amp;e=abab8c0019">We have no moat and neither does OpenAI</a>&#8221;</strong></em><strong> argued that open models are faster, more customizable, and will catch up. New <a href="https://timespan.us22.list-manage.com/track/click?u=1ef09c473dad0e37c3fb15e20&amp;id=a6e37088be&amp;e=abab8c0019">research</a> shows open-source is now only months behind closed models.</strong></p><p><strong>So what&#8217;s your take &#8211; does DeepSeek change the game? Or is this just more proof of the writing that&#8217;s been on the wall?</strong></p><p>DeepSeek certainly did a lot of clever research. But now that&#8217;s all open-source. It&#8217;s available research. It&#8217;s stuff OpenAI and others will take and incorporate.</p><p>But it&#8217;s important to keep in mind &#8211; DeepSeek is not matching OpenAI&#8217;s o3, which is the closest we've seen to AGI. 
So OpenAI is still 6-12 months ahead.</p><p>The question is &#8211; <em>will that lead matter? </em>Or will success come down to something else, like distribution or brand?</p><p>Closed model providers still hold a lot of brand value, especially with stodgy companies that are still on IBM or work with Accenture or Bain to incorporate AI. There&#8217;s a big enterprise business to be built purely on that recognition. OpenAI is positioned as the world&#8217;s best AI expert, and can charge millions to implement their models or fine-tune custom models for the enterprise segment. Big companies are already paying OpenAI for this.</p><p>But when it comes to mid-market adoption, that&#8217;s a different challenge. Competing there requires strong UI and product expertise. The mid-market will demand products that are more specialized.</p><p><strong>Let me paint two extremes. The first scenario is an oligopoly of 3-4 closed-source models powering everything.</strong></p><p><strong>The other scenario is a completely commoditized model layer &#8211; cheap, open-source models that everyone can fork and build on.</strong></p><p><strong>Where do you think we land on that spectrum &#8211; and more importantly, why?</strong></p><p>Models will inevitably get commoditized.</p><p>The concept of language models &#8211; predictive systems fulfilling prompts &#8211; has existed forever. But what few predicted, despite it being mathematically provable for a long time, is that a model could encode enough logic to power a new computing paradigm. That&#8217;s what ChatGPT revealed, why it broke the internet.</p><p>Language completion costs will likely trend toward zero over time. Over how long? Who knows. But open-source models like DeepSeek accelerate that downward pressure.</p><p>But language is just one model of computing &#8211; there&#8217;s image and video. Video models, for example, must create a whole new way to encode physics, just like LLMs embedded linguistic logic and reasoning. Protein folding is its own language, meaning genomics will have its own billion-dollar model.</p><p>How many domains will follow?</p><p>Does OpenAI stay ahead just by being 6-12 months in front? Each breakthrough &#8211; language, reasoning, protein models, AGI &#8211; is worth billions.</p><p><strong>Maybe one way to think about this is &#8211; will the pace of model performance soon plateau?</strong></p><p><strong>If it does, then that probably lends to the open-source paradigm, since any further research would only yield incremental gains that get quickly commoditized.</strong></p><p><strong>But if we expect performance to keep accelerating over the next 3-5 years, this would favor research labs with capital, scale, and data, since the potential rewards from continuous investment are huge. The payoff is there if you can stay ahead.</strong></p><p>I don't have a crystal ball, but my intuition leans toward the idea that we&#8217;re still at the beginning. We thought we'd reach AGI soon after GPT-3 and GPT-3.5, but we're still not there &#8211; although maybe OpenAI has reached AGI in private.</p><p>The challenge is that building these models still takes millions in R&amp;D, and then they quickly get commoditized. So, is it worth the investment? Will OpenAI ever be profitable as a research company? Probably not.</p><p>But it's still a paradigm-shifting business, generating revenue, and creating value. 
They can expand into many domains &#8211; video, protein modeling, genomics, energy &#8211; which buys them more time, more brand recognition, and new opportunities.</p><div><hr></div><h3>03 | The pace of AI innovation</h3><p><strong>Regardless of which paradigm wins, AI performance is advancing fast. OpenAI&#8217;s o3 <a href="https://timespan.us22.list-manage.com/track/click?u=1ef09c473dad0e37c3fb15e20&amp;id=5328d76e2e&amp;e=abab8c0019">just hit</a> 87.5% on the ARC-AGI benchmark &#8211; up from 55% just months ago.</strong></p><p><strong>When you fast forward 2 years, what type of performance do you think we&#8217;ll see?</strong></p><p>We haven't seen AGI yet &#8211; maybe within the next 5 years, maybe sooner. So what does that do?</p><p>You can argue we get to a singularity, and then human-led innovation is done. But that seems unlikely.</p><p>I think each new model breakthrough will open a whole bunch of new applications. For example, we still don&#8217;t have a universally applicable AI tutor. While we have self-driving cars, we need further advancements in robotics. The pace of innovation won't stop as capabilities grow &#8211; if anything, it will accelerate.</p><p>Intelligence could follow Moore's Law, doubling every year or year and a half. And each breakthrough will give rise to new billion-dollar industries.</p><p><strong>We've talked about model performance, but there are also a lot of other constraints around adoption &#8211; energy, cost, human desire to use these tools. Where do you see the biggest challenges to adoption today?</strong></p><p>Maybe this sounds strange, but I think the biggest bottleneck is creativity, imagination, and focus.</p><p>Take customer service. The biggest issue is that there are many, many &#8220;AI&#8221; solutions, and so many are ineffective. At Forethought, we have a fully agentic model that our customers think is magic. But there are hundreds of low-quality solutions out there &#8211; outdated decision-tree chatbots or GPT-based systems that just scrape knowledge articles.</p><p>These dilute the message, and it takes time to break through the noise.</p><p>Most customers don&#8217;t even know what&#8217;s possible. They&#8217;re 1-3 years behind on what they think is achievable.</p><p>Remember, building an AI company is hard. It&#8217;s like &#8220;NP-complete&#8221; in computer science &#8211; this concept that verifying a solution is quick, but <em>finding</em> the solution is computationally intense and difficult.</p><p>It&#8217;s like a &#8220;startup-complete&#8221; problem &#8211; a successful AI company requires not just good technology but also distribution, marketing, and product packaging. It takes time to get enough iterations and reach enough people to make an impact. Many startups overlook this.</p><div><hr></div><h3>04 | The application layer</h3><p><strong>As foundation models improve and software is able to handle increasingly complex tasks, how do you think about the role of the application layer?</strong></p><p><strong>If I'm an entrepreneur developing my product strategy, what can I build at the product level &#8211; before any real customer traction &#8211; that could create durable moats and differentiation?</strong></p><p>My answer is simple &#8211; <em>does the product work?</em></p><p>Take AI sales development representatives (SDRs), for example. There are probably 50+ companies in this space trying to automate outbound sales, but how many actually generate pipeline and book qualified meetings? 
Most solutions just regurgitate knowledge or craft basic emails.</p><p>But a truly autonomous AI agent can handle tasks like LinkedIn research, objection handling, follow-ups, and meeting scheduling &#8211; executed with context and proper timing.</p><p>This requires a lot of agentic capabilities that most companies fail to build. It&#8217;s so easy right now to just build RAG on your emails. The barriers to entry are so low that many default to the lowest common denominator.</p><p>I&#8217;m an angel investor in a handful of application-layer companies. I always ask how they're using agentic infrastructure. What orchestration frameworks have they tried? Have they implemented new training techniques from esoteric research papers? Can the product independently initiate tasks and complete long-running actions? If they&#8217;re just building with GPT, that&#8217;s a red flag.</p><p>The top 5% of teams get this right. They have the research chops and the product instinct to package technology for users.</p><p>But at the core, it always comes down to &#8211; does the product work?</p><div><hr></div><h3>05 | New distribution models</h3><p><strong>Maaria (my partner at Timespan) and I believe the best products are designed for distribution from day one &#8211; where the growth strategy is deeply embedded in how the product is used. That principle guides a lot of our investment thinking. It seems even more important as the field becomes more crowded.</strong></p><p><strong>What new distribution methods do you see as software becomes more autonomous? How will AI-powered products reach users in ways that weren&#8217;t possible before?</strong></p><p>Boring answer, you&#8217;ve probably heard this a thousand times &#8211; pricing. Don&#8217;t even talk about selling seats. Quantify the outcome your product delivers, and price based on that.</p><p>I invested in <a href="https://timespan.us22.list-manage.com/track/click?u=1ef09c473dad0e37c3fb15e20&amp;id=e78015c321&amp;e=abab8c0019">Recraft</a>, an AI image generator, and when the founder wanted to use a monthly pricing model, I suggested charging per image generated instead. They went that route and are seeing incredible growth.</p><p><strong>In B2B SaaS, selling to enterprises or mid-market is challenging because you often need to sell before gaining access to data, making it hard to really prove value.</strong></p><p><strong>The sales process is still sequential &#8211; pitch the value prop, run a trial, prove it, then deploy. I&#8217;d love to see this whole chain disrupted.</strong></p><p><strong>Are there new ways to integrate AI to prove the value prop more quickly, to reduce friction in adopting these tools?</strong></p><p>I completely agree. At Forethought, one of the big changes we made to distribution last year was launching a free trial version for everyone, whether it&#8217;s for knowledge-based RAG or agentic workflows. It takes less than 24 hours to integrate, about 8 hours on average.</p><p>Our win rates went up when we started doing this.</p><p>Speed of deployment is a huge advantage in AI. It all comes down to one question &#8211; <em>does it work?</em> If you can prove it works quickly, in a simple, fun, or clever way, you'll be ahead of your competitors.</p><p><strong>Maybe there&#8217;s an idea here &#8211; an agentic sales motion where AI-driven systems integrate with APIs and data in real time, delivering value as part of the pitch itself. 
In the sales pitch, you talk about the value, and by the end, you&#8217;re proving it.</strong></p><p>Oh, man! I actually did something similar today, but not with APIs. I used web scraping. I told them, "We can launch a bot for your website now, just off your site." And actually pulled it off in the meeting with our technology.</p><div><hr></div><h3>06 | Building trust through UX</h3><p><strong>Another aspect of the application layer I&#8217;ve been thinking about is how UX will evolve.</strong></p><p><strong>For any technology to scale, users need to trust and love the experience. Even if AI agents can handle tasks, people won&#8217;t fully hand over the keys without confidence in the system.</strong></p><p><strong>Forethought has been great at this. Your product is really nuanced in how it prioritizes trust, how it augments the end users &#8211; customer support agents. It integrates seamlessly into their workflows, rather than replacing them.</strong></p><p><strong>Do you have any frameworks for thinking about how UX will evolve? Where will the line be between tasks we </strong><em><strong>want</strong></em><strong> to keep human and tasks we&#8217;ll be comfortable letting machines take over?</strong></p><p>That&#8217;s a deep question. For now, I think the key idea is "show your work."</p><p>As things become more autonomous, people need to trust the system, and that&#8217;s hard when they don&#8217;t understand how it arrived at a decision. Even if the AI gives the correct answer, people want to know how it got there.</p><p>We learned this the hard way at Forethought.</p><p>At first, we&#8217;d just give the answer, but customers wanted more control. They wanted to know the data sources, understand how the decision was made. Giving people control, even if the AI is doing most of the work, will be a big UX principle moving forward.</p><p>Another big point is, instead of building a whole new app, bring the intelligence to where the user is already working. At Forethought, we didn&#8217;t try to rebuild Zendesk&#8217;s ticketing system &#8211; we brought intelligence into it, subtly, while they were working. This will become more important over time.</p><p>Another point is to rely on human feedback for system improvement. Allow users to interact with AI, within their workflow, through natural language, instead of relying on pre-programmed decision trees. This creates positive feedback loops, so the AI gets better over time.</p><p>Many are not doing this yet, but I expect it will become more common.</p><p><strong>There are so many possibilities &#8211; interfaces shifting from visual dashboards to text-based inquiry, coding automation enabling non-technical users to customize their own UX.</strong></p><p><strong>It feels like these aspects could really make or break a product experience, even if the raw agentic capabilities are there.</strong></p><p>Exactly. And that&#8217;s what I meant by "startup-complete." You still need to build a great user experience, figure out distribution, and build a sustainable business. Having the best LLM won&#8217;t do it alone &#8211; that will commoditize.</p><p>It&#8217;s so hard to build a successful AI company. Even though tools have gotten easier to use &#8211; maybe <em>because</em> the tools are easier to use &#8211; creating a sustainable business requires so much more than just a fancy demo.</p><p><strong>Totally. 
I suspect this will play out like past tech cycles.</strong></p><p><strong>In the early days of the Internet, open protocols like HTTP, SMTP, and TCP/IP enabled a wave of digital businesses to be built. These open standards were the foundational building blocks of the web.</strong></p><p><strong>Some companies succeeded, like Amazon, but there was also a lot of over-investment and hype. A lot of companies failed because they couldn't figure out product.</strong></p><p><strong>We&#8217;re in a similar moment now &#8211; there&#8217;s a rush of capital because no one wants to miss out on </strong><em><strong>the</strong></em><strong> AI company that dominates a particular sector.</strong></p><p><strong>But, as you mentioned, complexities around UX, distribution, and product haven&#8217;t gotten any easier. It&#8217;ll take time for the right form factors to emerge &#8211; patience, focus, and real product taste will separate the winners from the losers.</strong></p><p><strong>I&#8217;m not sure there&#8217;s always a first mover advantage right now.</strong></p><p>Absolutely. For any startup, even if you build the best product, there&#8217;s still a high probability of failure.</p><p>With AI products, it&#8217;s even harder to succeed.</p><p>You&#8217;re dealing with new workflows and big questions about how humans interact with machines. The core infrastructure is shifting so fast. It's a tougher problem. The quality of the team and the ability to build real, scalable products are still key. Despite the noise, the fundamentals haven't changed.</p><div><hr></div><h3><strong>07 | </strong>Rethinking the entrepreneur-investor relationship</h3><p><strong>As you think about the entrepreneur-investor relationship, what do you think needs to change from the last tech cycle? Do you think that there's a way that investors and entrepreneurs can interact better or differently?</strong></p><p>I have a lot of thoughts on this, not necessarily related to AI.</p><p>A few things come to mind. My best investors are the ones who get in the weeds.</p><p>One investor on my cap table, <a href="https://timespan.us22.list-manage.com/track/click?u=1ef09c473dad0e37c3fb15e20&amp;id=ac62df3a27&amp;e=abab8c0019">Vanessa Larco</a> at NEA, is a world-class product thinker. She&#8217;d sit down with me and my PM and nail our product strategy. Another investor with go-to-market experience helped us with a full sales training session. With you as well, I&#8217;d see the tangible value-add from a customer intro, candidates, or helping close capital. Those kinds of investors, who really care and get involved, add the most value.</p><p>Most VCs just give you money and opine in board meetings. If a VC can send you one customer, one candidate, or have one conversation around capital, they're in the top 80th percentile. That&#8217;s the sad reality.</p><p>The hype cycles we see now are great for short-term excitement, but in the long run, building a company comes down to the fundamentals. It&#8217;s about those tough moments, like losing your first customer or struggling to raise your next round, when you need a backer who truly understands the vision and sticks by you.</p><p>We need more VCs who are genuinely involved and care about the journey, not just chasing the next big thing.</p><p><strong>I agree. Venture has become so transactional. People are driven by fear &#8211; of missing out, of something not working and looking dumb &#8211; so they spread themselves thin trying to hit the one thing they can point to. 
This dilutes the focus it takes to build something important.</strong></p><p><strong>You see it with portfolios that are 50+ companies &#8211; how can you dig in and be thoughtful for each of those companies, or make any of those positions meaningful for your fund?</strong></p><p><strong>I do see a real changing of the guard in VC, of new investors starting new shops, trying to solve this problem, which is amazing.</strong></p><p>I agree.</p><p>Thank you for having me. I appreciate you letting me hop on and ramble about AI. It's always fun, and always great to catch up with you.</p><p><strong>You too, as always.</strong></p>]]></content:encoded></item><item><title><![CDATA[Machines Learned to Read. Now, Can They Stand to Reason?]]></title><description><![CDATA[Exploring how intelligent, application-layer architecture can create value beyond foundational AI models.]]></description><link>https://www.thetimes.blog/p/machines-learned-to-read-now-can</link><guid isPermaLink="false">https://www.thetimes.blog/p/machines-learned-to-read-now-can</guid><dc:creator><![CDATA[Evan O'Donnell]]></dc:creator><pubDate>Mon, 18 Nov 2024 02:56:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Vpq7!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3409fd35-898d-4a57-8c67-34c8ee93f953_1100x620.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Special thanks to Jared White, CEO and co-founder of <a href="https://www.matey.ai/">Matey.ai</a>, a Timespan portfolio company, for providing feedback on this article. Matey is pushing the boundaries of what&#8217;s possible with unstructured data through its intelligent software.</p><div><hr></div><h3><strong>01 | </strong>How machines learned to read</h3><p><strong>At Timespan Ventures, we think about technology on a timeline, through a historical lens.</strong></p><p>This helps <strong>(i)</strong> filter out incremental products, those recycling old primitives; <strong>(ii)</strong> anticipate new software paradigms; and <strong>(iii)</strong> visualize how those breakthroughs need to be productized to unlock new categories of value.</p><p><strong>One shift we are investing behind is the ability for AI to process unstructured, multimodal data.<sup>[1]</sup></strong></p><p>Early machine learning systems like recurrent neural networks (RNNs) processed data sequentially, analyzing each component in a fixed order. This limited their ability to retain information and understand complex relationships across large data sets.</p><p>To overcome this, data had to be pre-formatted and fed to computers in a specific order.</p><p>However, in the 2010s, two major breakthroughs in parallel processing<sup>[2]</sup><strong> </strong>enabled computers to move beyond these constraints:</p><ol><li><p><strong>Unlocking scale:</strong> GPUs, originally designed for graphics rendering in gaming, were adapted to process large datasets using thousands of parallel cores.<sup>[3]</sup> This new chip design allowed developers to dramatically scale the amount of data and compute used in model training, which resulted in larger and more sophisticated neural networks. The chart below shows the exponential growth in computation used to train AI over time. 
Before GPUs, the total resources dedicated to training top-performing models grew steadily at 1.5x per year; with modern GPUs, this <a href="https://epochai.org/data/notable-ai-models">growth accelerated</a> to 5x every year.</p></li></ol><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Vpq7!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3409fd35-898d-4a57-8c67-34c8ee93f953_1100x620.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Vpq7!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3409fd35-898d-4a57-8c67-34c8ee93f953_1100x620.png 424w, https://substackcdn.com/image/fetch/$s_!Vpq7!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3409fd35-898d-4a57-8c67-34c8ee93f953_1100x620.png 848w, https://substackcdn.com/image/fetch/$s_!Vpq7!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3409fd35-898d-4a57-8c67-34c8ee93f953_1100x620.png 1272w, https://substackcdn.com/image/fetch/$s_!Vpq7!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3409fd35-898d-4a57-8c67-34c8ee93f953_1100x620.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Vpq7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3409fd35-898d-4a57-8c67-34c8ee93f953_1100x620.png" width="1100" height="620" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3409fd35-898d-4a57-8c67-34c8ee93f953_1100x620.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:620,&quot;width&quot;:1100,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!Vpq7!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3409fd35-898d-4a57-8c67-34c8ee93f953_1100x620.png 424w, https://substackcdn.com/image/fetch/$s_!Vpq7!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3409fd35-898d-4a57-8c67-34c8ee93f953_1100x620.png 848w, https://substackcdn.com/image/fetch/$s_!Vpq7!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3409fd35-898d-4a57-8c67-34c8ee93f953_1100x620.png 1272w, https://substackcdn.com/image/fetch/$s_!Vpq7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3409fd35-898d-4a57-8c67-34c8ee93f953_1100x620.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button 
tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><ol start="2"><li><p><strong>Algorithmic breakthroughs: </strong>In 2017, Google researchers <a href="https://arxiv.org/pdf/1706.03762">introduced</a> a &#8220;self-attention&#8221; mechanism, the underpinning of the transformer model.<sup>[4]</sup> This method enabled AI to analyze relationships in data <em>simultaneously</em> (rather than sequentially), capturing context and long-range dependencies. It also eliminated the need to feed data in a clear, structured order, as models could dynamically assign importance to each element based on its relevance to others, regardless of their sequence. (Thanks to ongoing algorithmic modifications, <a href="https://epochai.org/blog/algorithmic-progress-in-language-models">AI models are currently advancing at more than twice the pace of Moore&#8217;s Law</a>!)</p></li></ol><p>Together, these advancements have transformed how machines process data, eliminating the need for rigid pre-formatting or upfront modification.<sup>[5]</sup></p><p><strong>In other words, computers can now handle raw, unstructured information &#8211; words, images, video &#8211; at a higher level of abstraction.</strong></p><p>This is a familiar pattern in software. For example, transpilers abstracted manual <em>code translation</em> between languages,<sup>[6]</sup> and containers abstracted <em>environment configuration</em>, enabling applications to run seamlessly across different platforms.<sup>[7]</sup></p><p><strong>Now, this shift is happening with data.</strong></p><p>This represents a significant leap forward. 
It&#8217;s why generative AI felt so impressive &#8211; and human &#8211; when it first hit the consumer market.</p><p>And understanding this evolution also sheds light on AI&#8217;s shortcomings &#8211; where these models still <a href="https://garymarcus.substack.com/p/llms-dont-do-formal-reasoning-and">underperform</a>, why some <a href="https://www.goldmansachs.com/images/migrated/insights/pages/gs-research/gen-ai--too-much-spend,-too-little-benefit-/TOM_AI%202.0_ForRedaction.pdf">question</a> the <a href="https://www.bloomberg.com/news/articles/2024-11-01/tech-giants-are-set-to-spend-200-billion-this-year-chasing-ai">$200B annual spend</a> on model development &#8211; and the role startups can play to harness the raw power of these models for applications in the real economy.</p><div><hr></div><h3><strong>02 | </strong>Application layer ~ Reasoning layer</h3><p>As model capabilities advance, what role can application-layer architecture play? Can startup applications build a competitive edge against incumbents that integrate directly with these models and already have advantages in data and distribution?</p><p><strong>These are timely questions.</strong></p><p>Today&#8217;s models excel at <em>general knowledge</em>. Trained on massive, internet-scale datasets, they demonstrate impressive recall and inductive reasoning. Ask an LLM about any topic &#8211; quantum physics, 17th-century Tulip Mania, or Act IV of <em>Hamlet</em> &#8211; and it responds thoroughly, in seconds.</p><p><strong>However, these foundational models are still challenging to work with.</strong></p><p>Unaided, they <a href="https://arxiv.org/pdf/2410.05229">struggle</a> with planning, deductive reasoning, and abstraction. For instance, they falter when prompted with superfluous information, <a href="https://arxiv.org/pdf/2408.07215">multi-step problems</a>, or obscured subjects &#8211; tasks that require &#8220;thinking&#8221; beyond pattern-based predictions.</p><p>This inability to reason is especially problematic when applying AI to more complex fields like supply chain management, medicine, and law, domains where context and judgment are essential.</p><p><strong>And research indicates that simply scaling models with more data and compute is <a href="https://www.reuters.com/technology/artificial-intelligence/openai-rivals-seek-new-path-smarter-ai-current-methods-hit-limitations-2024-11-11">unlikely</a> to close this gap.</strong></p><p>While the latest mega models <em>appear</em> to handle complex tasks better, <a href="https://www.semanticscholar.org/reader/f531d1a681ed12fd582767133318d0728316a0ae">they still rely on pattern recognition</a> rather than true, principled reasoning. For example, OpenAI&#8217;s o1 model &#8211; a GPT-4 variant optimized for step-by-step problem solving &#8211; still struggles to respond well to complex or unsolvable inputs, demonstrating a <a href="https://www.arxiv.org/pdf/2409.13373">reliance on prediction patterns and approximate retrieval</a> rather than true understanding. 
Even future models, such as OpenAI's upcoming Orion, are <a href="https://www.theinformation.com/articles/openai-shifts-strategy-as-rate-of-gpt-ai-improvements-slows?rc=8tlb7g">not expected</a> to show substantial gains in this capability, especially when factoring in their massive development costs.</p><p><strong>But this &#8220;reasoning gap&#8221; is one that startups can exploit, particularly those building domain-specific applications.</strong></p><p>Until recently, many AI applications were thin &#8220;wrappers&#8221; around models, offering limited differentiation. Today, however, <em>post-training techniques</em> &#8211; often referred to as <a href="https://blog.langchain.dev/what-is-a-cognitive-architecture/">cognitive architecture</a> &#8211; can create a deeper intelligence layer within applications by integrating code, prompts, and model calls to transform user input into more precise actions and responses.</p><p>In fact, these enhancements can deliver gains equivalent to a <strong><a href="https://arxiv.org/pdf/2312.07413">5&#8211;30x increase in training compute</a></strong> &#8211; at a fraction of the cost.</p><p><strong>Equivalent Training Compute Required to Match Post-Training Enhancement Gains</strong></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!hlAO!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbf452ec7-ce38-4231-8041-0d6da17910fe_990x515.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!hlAO!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbf452ec7-ce38-4231-8041-0d6da17910fe_990x515.png 424w, https://substackcdn.com/image/fetch/$s_!hlAO!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbf452ec7-ce38-4231-8041-0d6da17910fe_990x515.png 848w, https://substackcdn.com/image/fetch/$s_!hlAO!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbf452ec7-ce38-4231-8041-0d6da17910fe_990x515.png 1272w, https://substackcdn.com/image/fetch/$s_!hlAO!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbf452ec7-ce38-4231-8041-0d6da17910fe_990x515.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!hlAO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbf452ec7-ce38-4231-8041-0d6da17910fe_990x515.png" width="990" height="515" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/bf452ec7-ce38-4231-8041-0d6da17910fe_990x515.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:515,&quot;width&quot;:990,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" 
srcset="https://substackcdn.com/image/fetch/$s_!hlAO!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbf452ec7-ce38-4231-8041-0d6da17910fe_990x515.png 424w, https://substackcdn.com/image/fetch/$s_!hlAO!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbf452ec7-ce38-4231-8041-0d6da17910fe_990x515.png 848w, https://substackcdn.com/image/fetch/$s_!hlAO!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbf452ec7-ce38-4231-8041-0d6da17910fe_990x515.png 1272w, https://substackcdn.com/image/fetch/$s_!hlAO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbf452ec7-ce38-4231-8041-0d6da17910fe_990x515.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h6>This chart shows the improvement from several post-training enhancement techniques, measured by Compute-Equivalent Gain (GEC) &#8211; the increase in pre-training compute needed to match the enhancement's performance boost. <strong>Source: </strong><a href="https://arxiv.org/pdf/2312.07413">https://arxiv.org/pdf/2312.07413</a></h6><p></p><p><strong>These techniques can also drive differentiation and establish a near-term moat.</strong></p><p>Since post-training enhancements are most effective when tailored to specific industries, model-layer players in the race to build <em>general</em> capabilities are unlikely to compete directly. Non-AI-native incumbents would need to cannibalize their existing product infrastructure to keep up.</p><p>If designed well, products using these techniques can build a unique foundation, providing an edge over other startup competitors applying AI in similar markets but lacking the same technical depth and sophistication.</p><div><hr></div><h3><strong>03 | </strong>A framework for intelligent applications</h3><p>Below is a framework we&#8217;ve developed to decompose and the modern application layer. 
I&#8217;ve found it useful to reference when evaluating how new, AI-native applications can apply these foundational models to specific industries, and build an early, technical edge.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!yDYp!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a754052-5239-4077-9d30-b59c658f4ea8_2017x1544.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!yDYp!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a754052-5239-4077-9d30-b59c658f4ea8_2017x1544.png 424w, https://substackcdn.com/image/fetch/$s_!yDYp!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a754052-5239-4077-9d30-b59c658f4ea8_2017x1544.png 848w, https://substackcdn.com/image/fetch/$s_!yDYp!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a754052-5239-4077-9d30-b59c658f4ea8_2017x1544.png 1272w, https://substackcdn.com/image/fetch/$s_!yDYp!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a754052-5239-4077-9d30-b59c658f4ea8_2017x1544.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!yDYp!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a754052-5239-4077-9d30-b59c658f4ea8_2017x1544.png" width="1456" height="1115" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0a754052-5239-4077-9d30-b59c658f4ea8_2017x1544.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1115,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:929095,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://thetimesblog.substack.com/i/158383185?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a754052-5239-4077-9d30-b59c658f4ea8_2017x1544.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!yDYp!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a754052-5239-4077-9d30-b59c658f4ea8_2017x1544.png 424w, https://substackcdn.com/image/fetch/$s_!yDYp!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a754052-5239-4077-9d30-b59c658f4ea8_2017x1544.png 848w, https://substackcdn.com/image/fetch/$s_!yDYp!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a754052-5239-4077-9d30-b59c658f4ea8_2017x1544.png 1272w, https://substackcdn.com/image/fetch/$s_!yDYp!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a754052-5239-4077-9d30-b59c658f4ea8_2017x1544.png 1456w" 
sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Collectively, these components enable AI-native applications to overcome the reasoning and planning limitations of large, general models &#8211; and deliver deeper personalization, insight, and efficiency.</p><p>Collaborative, adaptive UX drives stickiness and repeat use. Advanced training methods, proprietary training data, and domain-specific knowledge graphs produce more nuanced outputs from unstructured data than what standalone models can achieve. 
Modular design and scalable pipelines enhance cost-efficiency and scalability.</p><p>Below are some of the aspects to focus on, though some may be more relevant than others, depending on the company, product vision, and market.</p><p></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!EPJ-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc80c03c-de21-436f-b624-cbac53c4b8a4_2957x6748.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!EPJ-!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc80c03c-de21-436f-b624-cbac53c4b8a4_2957x6748.png 424w, https://substackcdn.com/image/fetch/$s_!EPJ-!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc80c03c-de21-436f-b624-cbac53c4b8a4_2957x6748.png 848w, https://substackcdn.com/image/fetch/$s_!EPJ-!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc80c03c-de21-436f-b624-cbac53c4b8a4_2957x6748.png 1272w, https://substackcdn.com/image/fetch/$s_!EPJ-!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc80c03c-de21-436f-b624-cbac53c4b8a4_2957x6748.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!EPJ-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc80c03c-de21-436f-b624-cbac53c4b8a4_2957x6748.png" width="1456" height="3323" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/bc80c03c-de21-436f-b624-cbac53c4b8a4_2957x6748.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:3323,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1950707,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://thetimesblog.substack.com/i/158383185?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc80c03c-de21-436f-b624-cbac53c4b8a4_2957x6748.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!EPJ-!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc80c03c-de21-436f-b624-cbac53c4b8a4_2957x6748.png 424w, https://substackcdn.com/image/fetch/$s_!EPJ-!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc80c03c-de21-436f-b624-cbac53c4b8a4_2957x6748.png 848w, https://substackcdn.com/image/fetch/$s_!EPJ-!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc80c03c-de21-436f-b624-cbac53c4b8a4_2957x6748.png 1272w, 
https://substackcdn.com/image/fetch/$s_!EPJ-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc80c03c-de21-436f-b624-cbac53c4b8a4_2957x6748.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>These technical elements alone aren&#8217;t enough to establish long-term defensibility. However, they can unlock a material performance edge that &#8211; when paired with a thoughtful product roadmap, distribution, and GTM strategy &#8211; can catalyze a growth flywheel, data accumulation, and distribution advantages that ladder up to a more durable moat over time.</p><div><hr></div><p><strong>Now is a special moment in the evolution of AI.</strong></p><p>We&#8217;re seeing the limits of investing in model scaling and entering a moment when durable, specialized AI-native applications can emerge. With a clearer understanding of the potential &#8211; and limits &#8211; of foundational models, we can now better pinpoint where application-layer software can bridge gaps and deliver machine intelligence to real-world use cases.</p><p>At Timespan, our ambition is to be a thoughtful, committed partner to the protagonists of this story &#8211; the founders in the earliest stages of building a modern stack, solving industry problems with unstructured data in creative ways, and moving boldly and expediently to wow customers and navigate to product-market fit.</p><p></p><div><hr></div><div><hr></div><p><strong>Endnotes:</strong></p><p><strong>[1]</strong> <em>Unstructured data</em> refers to information that doesn&#8217;t have a predefined format or organization, making it harder to analyze and interpret compared to structured data, which is organized into a format like rows and columns that computers can more easily read. Unstructured data includes things like text documents, emails, social media posts, images, audio, video files, and sensor data, which vary widely in format and content.</p><p><strong>[2]</strong> <em>Parallel processing</em> splits a task into independent units that run simultaneously on multiple processing cores, allowing faster task completion. 
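A minimal sketch of the idea, assuming only Python&#8217;s standard <code>multiprocessing</code> module (the data and chunk sizes are made up, purely for illustration):</p><pre><code>import multiprocessing as mp

def partial_sum(chunk):
    # Each worker handles one independent slice of the data.
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i:i + 250_000] for i in range(0, 1_000_000, 250_000)]

    # Parallel: the four partial sums are computed simultaneously in separate processes,
    # which the operating system can schedule onto separate cores.
    with mp.Pool(processes=4) as pool:
        total = sum(pool.map(partial_sum, chunks))

    # Sequential equivalent: each chunk is processed one after another on a single core.
    assert total == sum(partial_sum(c) for c in chunks)
</code></pre><p>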
For example, in image analysis, each core might analyze a different part of an image at the same time, then combine the results for a full analysis. In contrast, <em>sequential processing</em> handles each part of the image one after another, using a single core to process each section in order. This sequential approach is slower, as it waits for each step to finish before moving to the next.</p><p><strong>[3]</strong> A <em>core</em> is a basic processing unit within a computer's CPU or GPU, capable of executing instructions or performing calculations. Each core can handle its own task independently. In CPUs, cores are typically powerful but limited in number, making them well-suited for sequential tasks. GPUs, however, have thousands of smaller, more specialized cores that excel in parallel processing, allowing them to handle large volumes of data concurrently, which is ideal for tasks like graphics rendering or AI computations. NVIDIA&#8217;s early growth as a gaming chip company &#8211; before becoming a leader in AI &#8211; is a great reminder that technology does not track to pre-existing sectors or categories!</p><p><strong>[4] </strong>With <em>self-attention</em>, each input to the model (like a word) is tokenized and represented by three components: a <em>query</em> (what it&#8217;s looking for in other words), a <em>key</em> (what it offers to others), and a <em>value</em> (its actual content). The model uses these queries and keys to calculate <em>attention scores</em> that quantify the relationships between tokens. By focusing on these connections, the model can understand context across the whole input, capturing all of these relationships in parallel. This mechanism not only boosts the predictive power of the model, but also its reasoning capability. By capturing long-range relationships and focusing on the most relevant data points, self-attention allows the model to draw from the full context, enabling it to find complex patterns and produce structured, context-aware outputs and reason through tasks (like answering questions or generating summaries) with greater accuracy and logical consistency. This mechanism is called &#8220;self-attention&#8221; because the model is only attending to parts of the input itself, rather than external data, allowing each word to "attend" to relevant words around it.</p><p><strong>[5] </strong>This chart estimates the contributions of scaling and algorithmic innovation in terms of the raw compute that would be naively needed to achieve a state-of-the-art level of performance. The contribution of algorithmic progress is roughly half as much as that of compute scaling. 
<strong>Source: </strong><a href="https://epochai.org/blog/algorithmic-progress-in-language-models">https://epochai.org/blog/algorithmic-progress-in-language-models</a>.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!zpiq!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02b4cd1a-d1d4-422d-a9c6-5b2d098eaa65_1168x696.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!zpiq!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02b4cd1a-d1d4-422d-a9c6-5b2d098eaa65_1168x696.png 424w, https://substackcdn.com/image/fetch/$s_!zpiq!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02b4cd1a-d1d4-422d-a9c6-5b2d098eaa65_1168x696.png 848w, https://substackcdn.com/image/fetch/$s_!zpiq!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02b4cd1a-d1d4-422d-a9c6-5b2d098eaa65_1168x696.png 1272w, https://substackcdn.com/image/fetch/$s_!zpiq!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02b4cd1a-d1d4-422d-a9c6-5b2d098eaa65_1168x696.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!zpiq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02b4cd1a-d1d4-422d-a9c6-5b2d098eaa65_1168x696.png" width="1168" height="696" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/02b4cd1a-d1d4-422d-a9c6-5b2d098eaa65_1168x696.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:696,&quot;width&quot;:1168,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!zpiq!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02b4cd1a-d1d4-422d-a9c6-5b2d098eaa65_1168x696.png 424w, https://substackcdn.com/image/fetch/$s_!zpiq!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02b4cd1a-d1d4-422d-a9c6-5b2d098eaa65_1168x696.png 848w, https://substackcdn.com/image/fetch/$s_!zpiq!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02b4cd1a-d1d4-422d-a9c6-5b2d098eaa65_1168x696.png 1272w, https://substackcdn.com/image/fetch/$s_!zpiq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02b4cd1a-d1d4-422d-a9c6-5b2d098eaa65_1168x696.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container 
restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><strong>[6]</strong> A <em>transpiler</em>, also known as a <em>source-to-source compiler</em>, is a tool that converts code written in one programming language into equivalent code in another language, typically at a similar abstraction level. Unlike a traditional compiler, which often translates high-level language code to lower-level machine code or bytecode, a transpiler converts code from one high-level language to another. This process enables developers to write code in one language but leverage the features and compatibility of another.</p><p><strong>[7] </strong>A <em>container</em> is a self-contained unit of software that includes everything an application needs to run &#8211; its code, libraries, and system tools &#8211; so it works the same way in different environments. Unlike full virtual machines, containers share the host operating system, which makes them lightweight and efficient. This isolation means that each container can run independently on the same system, allowing developers to easily deploy and scale applications consistently across servers.</p>]]></content:encoded></item><item><title><![CDATA[Will Coding Automation Create a New Paradigm for Open Source?]]></title><description><![CDATA[How coding automation could reshape the future of open source software.]]></description><link>https://www.thetimes.blog/p/will-coding-automation-create-a-new</link><guid isPermaLink="false">https://www.thetimes.blog/p/will-coding-automation-create-a-new</guid><dc:creator><![CDATA[Evan O'Donnell]]></dc:creator><pubDate>Mon, 16 Sep 2024 02:15:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!1pdG!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36bb293b-4633-4343-b590-6c321d3d60dd_1224x540.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3><strong>01 | </strong>Code as a commodity</h3><p>The way we build technology is undergoing a profound shift. 
New development standards and advancements in coding automation are making software development significantly faster and more accessible.</p><p><strong>Code, once the most proprietary and costly component of the software stack, is rapidly becoming a commodity.</strong></p><p>I&#8217;ve been reflecting on where this will most affect our industry, and I believe open source software will be at the center of this transformation.</p><p><strong>I predict that our conception of open source &#8211; how it is used, who uses it, how it gets integrated into products &#8211; might look fundamentally different than in any prior era of technology.</strong></p><div><hr></div><h3><strong>02 | </strong>The pros and cons of open source</h3><p>Open source software (OSS) is software where the source code is freely available for anyone to inspect, use, and modify for their own use case. It offers several benefits:</p><ul><li><p>Integrating OSS into in-house applications allows for deeper <strong>customization</strong> and <strong>modularity</strong>, compared to buying all-in-one, proprietary software, because the underlying code can be directly copied and edited to meet specific business needs.</p></li></ul><ul><li><p>A large <strong>community of contributors</strong> leads to rapid improvements, organic adoption, robust tooling, and (oftentimes) enhanced security, as large networks of developers identify and patch vulnerabilities.</p></li></ul><ul><li><p>OSS is often more <strong>cost-effective</strong>, allowing firms to circumvent all-in-one licensing fees and vendor lock-in. <a href="https://project.linuxfoundation.org/hubfs/LF%20Research/Measuring%20the%20Economic%20Value%20of%20Open%20Source%20-%20Report.pdf?hsLang=en">97% of firms</a> report net cost savings from using OSS, and without it, companies would need to spend <a href="https://www.hbs.edu/ris/Publication%20Files/24-038_51f8444f-502c-4139-8bf2-56eb4b65c58a.pdf">3.5x more</a> on software.</p></li></ul><p><strong>However, OSS has historically been a tough sell for investors.</strong></p><p>Generating revenue and building a competitive edge are challenging when the core product (the codebase) is free and accessible. Only a few open source companies &#8211; like <a href="https://www.mongodb.com/">MongoDB</a> and <a href="https://www.databricks.com/">Databricks</a> &#8211; have surpassed $1B in revenue, with business models relying on peripheral services like support, hosting, or premium security.</p><p>Additionally, OSS implementation requires technical expertise, which limits broader adoption. Closed source alternatives &#8211; with their plug-and-play functionality, user-friendly APIs, and dedicated support &#8211; appeal to buyers seeking ease of use.</p><p><strong>As a result, open source companies receive less than 5% of global software spend<sup>[1]</sup> and <a href="https://docsend.com/view/zgbwtzgj72nvzszq">under 5% of U.S. 
VC investment</a>.</strong></p><div><hr></div><h3><strong>03 | </strong>A spike in open source</h3><p>In the past several years, open source projects have gained ground, especially vis-a-vis closed-source alternatives.</p><p>For example, open source data management systems are now as preferred as proprietary ones, a sharp contrast from a decade ago when proprietary solutions were nearly twice as popular.<sup>[2]</sup></p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!1pdG!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36bb293b-4633-4343-b590-6c321d3d60dd_1224x540.png" width="1224" height="540" alt=""></figure></div>
<p><a href="https://www.openlogic.com/resources/2023-state-open-source-report">Two-thirds of companies</a> increased their use of OSS last year, particularly in <strong>(a)</strong> machine learning, where frameworks like TensorFlow and PyTorch <a href="https://medium.com/@navarai/tensorflow-vs-pytorch-a-comprehensive-comparison-for-2024-b9df6bbc5933">dominate</a>, and <strong>(b)</strong> data processing, where companies like Netflix and Uber find Apache <a href="https://netflixtechblog.com/hadoop-platform-as-a-service-in-the-cloud-c23f35f965e7">Hadoop</a> and <a href="https://www.uber.com/blog/uscs-apache-spark/">Spark</a> now suitable for enterprise-scale workflows.</p><p>Moreover, open source's highly-engaged communities and large user bases are often driving faster, more efficient revenue growth than closed-source software, which usually relies on traditional sales and marketing.<sup>[3]</sup></p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!QPk0!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e283431-14d7-4487-89b5-a187cb0d8a3b_1056x594.png" width="1056" height="594" alt=""></figure></div>
<p>What&#8217;s driving this spike?</p><p><strong>This rise in open source adoption and commercialization is closely tied to the standardization of web development.</strong></p><p>I&#8217;ve written <a href="https://www.thetimes.blog/post/the-new-vertical-saas-playbook">before</a> about how frameworks like React / Next make front-end development easier. In addition, standardized tools like Docker and Kubernetes, RESTful APIs, and microservices<sup>[4]</sup> have made integrating and customizing open source software significantly faster and more efficient.</p><p><strong>As coding automation continues to advance, we may be entering a new era, one where open source is orders of magnitude more powerful and accessible.</strong></p><div><hr></div><h3><strong>04 | </strong>Will coding automation create a new paradigm for open source?</h3><p>AI-driven coding automation is poised to revolutionize software development.</p><p>Programming assistants like <a href="https://github.com/features/copilot">GitHub Copilot</a> are automating code generation, review, and deployment. (Already, Copilot estimates <a href="https://github.blog/news-insights/research/research-quantifying-github-copilots-impact-on-developer-productivity-and-happiness/">55% time savings</a> on task completion for developers.)</p><p>Like past technology cycles, where digital distribution and data storage costs fell to near-zero, the structural barriers that have always limited open source adoption &#8211; the costs of creating, customizing, debugging, and deploying code &#8211; are rapidly declining.</p><p><strong>As this trend accelerates, software&#8217;s value proposition may shift in favor of open source. </strong>If managing OSS is more <em>cost-effective</em>, more <em>customizable</em>, and now <em>simpler</em> than closed-source solutions, it may well become the dominant model for software development. This may be especially true as software budgets get strained, buyers get more discerning around ROI, and <a href="https://www.vendr.com/insights/saas-trends-report-q1-2024">sales cycles extend</a>.</p><p>Here are key areas where AI tooling could reshape open source:</p><ul><li><p><strong>Simplified integration</strong>: Coding assistants &#8211; <a href="https://codesignal.com/report-developers-and-ai-coding-assistant-trends/">already used by half of developers on a daily basis</a> &#8211; will simplify OSS integration, allowing smaller teams to deploy and customize code much more quickly and with fewer errors.</p></li></ul><ul><li><p><strong>Wider access: </strong>Natural language processing will allow non-technical users to manipulate open source code using plain language. This opens OSS to a much wider audience, rapidly expanding the market for software development.</p></li></ul><ul><li><p><strong>Automated maintenance: </strong>AI will streamline the maintenance and security of OSS in live environments by automating updates, proactively resolving bugs, and eliminating the need for manual oversight.
This will make OSS even more reliable and secure, addressing concerns that have historically slowed its adoption in enterprise settings.</p></li></ul><ul><li><p><strong>New business models: </strong>AI-enabled customization, security, and performance monitoring could create new, recurring revenue streams for OSS companies &#8211;<em> and</em> a more scalable cost structure for providing those services. For example, open source provider <a href="https://www.elastic.co/observability/aiops">Elastic</a> began charging for AI-powered features like performance monitoring and security in its Elastic Cloud business, boosting that segment&#8217;s revenue by 29% year-over-year and growing its share of total revenue to 43% in 2024, up from 35% in 2022.<sup>[5]</sup></p></li></ul><ul><li><p><strong>Faster R&amp;D: </strong>Open source relies on global networks of developers to maintain code. However, collaboration at a large scale often faces bottlenecks, such as problems with code merging, conflict resolution, and quality control. AI tools are providing automated solutions for these issues, enabling open source to innovate faster than closed source in terms of quality and deployment speed.</p></li></ul><p>AI is reducing the cost of code generation to near-zero, breaking down the long-standing barriers to OSS integration and monetization.</p><p><strong>In this new paradigm, the fundamental value in software could shift &#8211; from code itself, to the unique ways it is shaped and customized directly by the end user.</strong></p><p>As a result, open source may soon have an enduring competitive edge over proprietary software.</p><p></p><div><hr></div><div><hr></div><p><strong>Endnotes:</strong></p><p>[1] In 2022, OSS spend was estimated at $25B (source) and the overall software market was estimated at $583.5B (source).</p><p>[2] Source: https://db-engines.com/en/ranking_osvsc. The DB-Engines Ranking measures popularity by combining factors like web mentions, search trends, technical discussions, job postings, professional profiles, and social media activity. These metrics are standardized and averaged to create a relative popularity score for each database system.</p><p>[3] Source: https://www.bvp.com/atlas/roadmap-open-source.</p><p>[4] A microservice is a software architecture where an application is built from small, independent services, each handling a specific function. Unlike monolithic systems, microservices can be developed, deployed, and scaled separately, offering greater flexibility, modularity, and easier maintenance.</p><p>[5] Elastic N.V. (2024). Q4 2024 shareholder letter. Source. 
Pages 21, 59.</p>]]></content:encoded></item><item><title><![CDATA[A Small Win for Open-Source AI]]></title><description><![CDATA[Key insights from a recent report on the risks and benefits of open versus closed AI models.]]></description><link>https://www.thetimes.blog/p/a-small-win-for-open-source-ai</link><guid isPermaLink="false">https://www.thetimes.blog/p/a-small-win-for-open-source-ai</guid><dc:creator><![CDATA[Evan O'Donnell]]></dc:creator><pubDate>Wed, 31 Jul 2024 18:39:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!F8Lk!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F759a8de3-33dd-4676-ad92-0e298c62f56c_636x636.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3><strong>01 |</strong> A small win for open-source AI</h3><p>There&#8217;s an <a href="https://www.nytimes.com/2024/05/29/technology/what-to-know-open-closed-software.html">ongoing debate</a> in Silicon Valley around open versus closed AI models.</p><p>Yesterday, the National Telecommunications and Information Administration (NTIA), an agency within the Department of Commerce, released a <a href="https://www.ntia.gov/issues/artificial-intelligence/open-model-weights-report">report</a> weighing the benefits and risks of open models. This work was mandated by President Biden's October 2023 <a href="https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/">Executive Order</a> on &#8220;Safe, Secure, and Trustworthy AI.&#8221;</p><p><strong>TL;DR: it&#8217;s too early to propose any restrictions on open models.</strong></p><p>This is a small, somewhat unexpected victory for open-source advocates.</p><p>(Although &#8220;too early&#8221; is the operative phrase that best summarizes NTIA&#8217;s conclusions. This was more of a &#8220;let&#8217;s punt and continue to evaluate&#8221; than anything else.)</p><p>Beyond the final recommendation, the report does offer some early insight into an important question &#8211; how as a society should we think about open access to this powerful new technology?</p><div><hr></div><h3><strong>02 |</strong> Definitions and methodology</h3><p><strong>The report defines &#8220;open models&#8221; as AI systems with model weights<sup>[1]</sup> that are available to the public.</strong></p><p><em>Model weights</em> are numerical values that determine a model's output based on a given input. They act as the AI&#8217;s &#8220;blueprint,&#8221; allowing the underlying model to be used, replicated, integrated directly into an application, and adjusted for various use cases. Examples of open models include <a href="https://llama.meta.com/">LLaMA</a> (by Meta) and <a href="https://mistral.ai/">Mistral</a>.</p><p>In contrast, &#8220;closed&#8221; models, like the most recent ones provided by OpenAI, keep these weights private. Closed models only allow users to submit inputs and receive outputs via web interfaces or APIs.</p><p>Open models allow for easier customization and access. They enable local data processing, which enhances privacy since data does not have to be sent to the model developer. Open models are generally cheaper to run than closed models for similar use cases.</p><p>However, because their weights are openly available, they pose challenges in monitoring and preventing misuse by malicious actors. 
This includes the potential development of weapons (bio, nuclear, cyber), illegal content, and dis/misinformation.</p><p>This report specifically examined:</p><p><strong>(a) large, dual-use foundational models</strong> (excluding smaller models below 10 billion parameters), and</p><p><strong>(b) the </strong><em><strong>marginal</strong></em><strong> impact of open models</strong>, i.e., their risks and benefits <em>beyond </em>those of closed models and other pre-existing technologies.</p><div><hr></div><h3><strong>03 |</strong> Key takeaways</h3><p>The report concludes that the benefits of open models&#8212;developer contributions, customization, data privacy, democratized access, and competition that drives down costs&#8212;currently outweigh the marginal risks of misuse.</p><p>But I found the fine print most interesting:</p><ul><li><p>Today, closed models typically have a performance advantage. However, open models are only a few months behind, and the time to performance parity is shrinking. Footnote 32 on page 53 of the report has some great insight from industry operators on this point.<sup>[2]</sup></p></li></ul><ul><li><p>The risks of open models, such as malicious actors fine-tuning or removing safety features for harmful purposes, are offset by the benefits of developer collaboration to identify vulnerabilities and implement safety measures. In instances where models are used inappropriately, the transparency around weights and source code should create better conditions for auditing, accountability, and response time.</p></li></ul><ul><li><p>For risks related to societal wellbeing (i.e., open models being used to generate illegal sexual material, dis/misinformation, or discriminatory outputs), the negative impact may come more from existing technology, rather than direct, public access to model weights. For instance, open models might make it easier to create &#8220;deep fake&#8221; content. But focusing regulation and resources on <em>controlling the spread</em> of these &#8220;deep fakes&#8221; through existing distribution channels like social networks may yield a better ROI. Instead of focusing solely on what started the fire, focus on the conditions that allowed it to spread.</p></li></ul><ul><li><p>Open models may not substantially exacerbate misuse, as closed models are <em>also</em> prone to manipulation (albeit to a lesser extent). For example, the non-consensual, AI-generated intimate images of Taylor Swift that spread across the internet in early 2024 were made using a closed model. Similarly, OpenAI recently reported that malicious nation-state-affiliated actors were using ChatGPT, a closed model, for their cyber operations.</p></li></ul><ul><li><p>Regulating open models is unlikely to change the trend towards an oligopoly of a few dominant foundational models. High barriers to compute access, capital, and talent availability will have a greater influence on these dynamics.</p></li></ul><ul><li><p>Having a few open models among the preferred set of large providers should foster healthy competition further up the AI supply chain, particularly at the tooling and application layers. 
Open models are <a href="https://arxiv.org/pdf/2403.07918">easier to customize</a>, which should lead to a more robust ecosystem of specialized applications and reduce the systemic risk of over-reliance on a single system (we all experienced the risks of over-reliance during the recent <a href="https://www.govtech.com/security/crowdstrike-outage-showed-the-power-of-a-single-failure">CrowdStrike failure</a>).</p></li></ul><p>At this juncture, it&#8217;s far too early to declare whether open versus closed will &#8220;win,&#8221; or which is ultimately &#8220;better.&#8221; And perhaps that is the wrong debate to have in the first place. Both have risks and benefits. Great companies will be built on either &#8211; or, more likely, both.</p><p>What is important is that policymakers continue to monitor the risks and benefits, and respond appropriately to protect people and allow for healthy market dynamics as this technology evolves.</p><p>As for my own investment thesis:</p><ol><li><p>I don&#8217;t have a fixed point of view on whether a startup should build on a closed vs. open model. I care a lot more about whether there&#8217;s a clear narrative about <em>why</em> an individual model (or stitching of multiple models) is the right fit for a particular product based on pricing, utility, and developer functionality.</p></li><li><p>If closed models are unlikely to win long-term based solely on a performance edge, I expect they will invest more in commercial integrations, developer tooling, and network activity on top of their core models. We&#8217;re seeing it already with OpenAI&#8217;s launch of <a href="https://www.theverge.com/2024/7/25/24205701/openai-searchgpt-ai-search-engine-google-perplexity-rival">SearchGPT</a>. Going after broad use cases (e.g., AI-powered personal assistants or search) is a very risky endeavor for startups. Closed models will throw virtually unlimited resources at these obviously large markets as they grow and try to differentiate beyond sheer performance.</p></li></ol><p>Philosophically, I believe open models are an important part of the ecosystem. They foster healthy competition, developer optionality, and flexibility. They also offer cost efficiencies, which are important in this early stage of technology development.</p><p>If the marginal risk to society is not material, let the market dictate how and where open and closed models get used.</p><p>I&#8217;m glad that&#8217;s where the NTIA landed &#8211; at least for now.</p><div><hr></div><div><hr></div><p><strong>Endnotes:</strong></p><p><strong>[1]</strong> A quick primer on model weights from this report (page 8): &#8220;An AI model processes an input&#8212;such as a user prompt &#8212; into a corresponding output, and the contents of that output are determined by a series of numerical parameters that make up the model, known as the <em><strong>model&#8217;s weights</strong></em>. The values of these weights, and therefore the behavior of the model, are determined by training the model with numerous examples. The weights represent numerical values that the model has learned during training to achieve an objective specified by the developers. Parameters encode what a model has learned during the training phase, but they are not the only important component of an AI model. For example, foundation models are trained on great quantities of data; for large language models (LLMs) in particular, training data can be further decomposed into trillions of sub-units, called tokens.
Other factors also play a significant role in model performance, such as the model&#8217;s architecture, training procedures, the types of data (or modalities) processed by the model, and the complexity of the tasks the model is trained to perform.&#8221;</p><p><strong>[2]</strong> A few notable quotes:</p><p><em>Center for AI Policy</em>: &#8220;We find that the timeframe between closed and open models right now is <strong>around 1.5 years</strong>. We can arrive at this conclusion by analyzing benchmark performance between current leading open weight AI models and the best closed source AI models.&#8221;</p><p><em><a href="http://unlearn.ai/">Unlearn.ai</a></em>: &#8220;At the moment, it takes about<strong> 6 months to 1 year</strong> for similarly performing open models to be successfully deployed after the deployment of OpenAI&#8217;s closed models. The time gap between proprietary image recognition models and high-quality open-source alternatives <strong>has narrowed</strong> relatively quickly due to robust community engagement and significant public interest. In contrast, more niche or complex applications, such as those requiring extensive domain- specific knowledge or data, might see longer timeframes before competitive open models emerge.&#8221;</p><p><em>Databricks: </em>&#8220;Databricks believes that major open source model developers are <strong>not far behind</strong> the closed model developers in creating equally high performance models, and that <strong>the gap between the respective development cycles may be closing</strong>.&#8221;</p><p><em>Meta</em>: &#8220;It is <strong>not possible to generally estimate this timeframe</strong> given the variables involved, including the model deployment developers&#8217; business models and whether, in the case of Llama 2, they download the model weights from Meta directly or accessed it through third-party services like Azure or AWS.&#8221;</p>]]></content:encoded></item><item><title><![CDATA[The Next Chapter of Consumer Marketplaces]]></title><description><![CDATA[Emerging trends in consumer marketplaces and what the future of the category might look like.]]></description><link>https://www.thetimes.blog/p/the-next-chapter-of-consumer-marketplaces</link><guid isPermaLink="false">https://www.thetimes.blog/p/the-next-chapter-of-consumer-marketplaces</guid><dc:creator><![CDATA[Evan O'Donnell]]></dc:creator><pubDate>Mon, 29 Jul 2024 16:23:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!k39F!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa4833882-552a-444b-87ad-4e555519820b_3852x2186.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3><strong>01 |</strong> A brief history of consumer marketplaces</h3><p>Consumer marketplaces are digital platforms that facilitate transactions between individual customers and various sellers, such as peers, freelancers, and businesses. These companies have been among the most consequential in venture capital. Examples include Amazon, Uber, Airbnb, and Etsy.</p><p><em>It&#8217;s a model almost perfectly designed for the connective tissue of the internet</em> &#8211; linking diverse users across the globe into a singular hub of commercial activity.</p><p>Successful consumer marketplaces do four things remarkably well:</p><ol><li><p><strong>Define a novel, recurring experience. 
</strong>Amazon created the first destination for purchasing everyday goods (starting with books) through the web browser.</p></li><li><p><strong>Drive better unit economics and scalable growth through new technology.</strong> Uber used GPS, mobile distribution, and real-time data processing to achieve economies of scale, on-demand delivery, and a dramatically better cost structure.</p></li><li><p><strong>Harness network effects to create high switching costs. </strong>GoodRx&#8217;s extensive pharmacy network, discounts, and integrated telehealth services made it difficult for consumers to find comparable savings and convenience elsewhere.</p></li><li><p><strong>Run a creative go-to-market strategy to crack (pun intended!) the chicken-egg problem</strong> (where users on one side of the platform only find it useful if the other side is already active). Airbnb did this by <a href="https://hackernoon.com/how-airbnb-hacked-craigslist-for-viral-growth-24l35eg">scraping Craigslist</a> to automate outreach to its early cohort of hosts, then thoughtfully curating its initial listings to attract renters.</p></li></ol><p><strong>Timing is key to evaluating new consumer marketplaces (although often overlooked).</strong> That's because, historically, these models grow in tandem with technological shifts that unlock new types of commercial activity.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!k39F!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa4833882-552a-444b-87ad-4e555519820b_3852x2186.png" width="1456" height="826" alt=""></figure></div>
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a4833882-552a-444b-87ad-4e555519820b_3852x2186.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:826,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:373694,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://thetimesblog.substack.com/i/158380407?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa4833882-552a-444b-87ad-4e555519820b_3852x2186.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!k39F!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa4833882-552a-444b-87ad-4e555519820b_3852x2186.png 424w, https://substackcdn.com/image/fetch/$s_!k39F!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa4833882-552a-444b-87ad-4e555519820b_3852x2186.png 848w, https://substackcdn.com/image/fetch/$s_!k39F!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa4833882-552a-444b-87ad-4e555519820b_3852x2186.png 1272w, https://substackcdn.com/image/fetch/$s_!k39F!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa4833882-552a-444b-87ad-4e555519820b_3852x2186.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><div><hr></div><h3><strong>02 |</strong> Underlying patterns in consumer marketplaces</h3><p>When examining the evolution of consumer marketplaces, several patterns emerge.</p><p><strong>// As automation improves, digital marketplaces are both (a) specializing and (b) handling more complex transactions.</strong> Early platforms like eBay managed relatively simple transactions for second-hand, 
commodity goods. Now, with better data processing and advanced search capabilities, platforms are building products that can match heterogeneous supply (in Fiverr&#8217;s case, freelancers with diverse skills) to more complex projects. This trend will accelerate as advancements in AI and improvements in data processing drive even deeper automation.</p><p><strong>// Strong network effects lead to industry concentration. </strong>In the last several years, 2-3 companies have consistently accounted for 40-60% of total annual GMV among the top 100 private consumer marketplaces.<sup>[1]</sup> Public companies reflect a similar trend &#8211; Amazon, for example, commands nearly <a href="https://www.emarketer.com/content/amazon-accounted-40-of-ecommerce-sales-4-of-retail-sales-2023">40% of e-commerce market share</a>.</p><p>For entrepreneurs, this dynamic highlights (i) the importance of timing, to capitalize on new technology shifts and secure a first-mover advantage, and (ii) the need for a clear product thesis that accounts for precisely how this moment in technology can service unmet needs for buyers and sellers.</p><p><strong>// Data ownership and customization are increasingly important for suppliers.</strong> SaaS platforms like Shopify and social networks like Instagram are unbundling the traditional marketplace model. Instead of having a central intermediary trafficking transaction flows, sellers are now building custom storefronts and distributing them directly to various devices and social networks in order to retain more control over their data and customer experience. To compete, new marketplaces must also address these needs. This can be achieved by using decentralized infrastructure and/or better self-custodial tools.</p><p><strong>// Historically, the most transformative marketplaces build new types of network endpoints. These endpoints serve as the essential building blocks that give rise to entirely new product categories.</strong></p><p>A network endpoint acts as a gateway that ingests and processes data into a marketplace network. Examples include distribution channels like web browsers, IoT devices, mobile phones, and APIs that pull and integrate data from external sources. These new types of endpoints introduce unique data assets into the product experience, which then unlock entirely new categories of commerce.</p><p>For example, Uber tapped into a new type of endpoint (mobile distribution) to feed a new type of data asset (real-time location data) into its product, which was essential for creating a new category (on-demand rides). Similarly, GoodRx built its own <a href="https://www.goodrx.com/developer/documentation">proprietary API</a> to feed a new type of data asset (real-time prescription drug pricing mapped against insurance plan formularies<sup>[2]</sup>) into its product. This created an entirely new market around prescription savings and price comparison.</p><p>I expect these trends to continue in this next wave of technology development.</p><div><hr></div><h3><strong>03 | </strong>New trends in consumer marketplaces</h3><p>Recently, I have been spending time with companies that are in the early innings of building new consumer marketplaces.</p><p>A few years ago, this space felt stale &#8211; too many "<em>Uber for [insert niche, existing product here].</em>"</p><p>But today feels different.
Founders are in experimentation mode, testing use cases and exploring the potential for new technology to open up entirely new commercial categories.</p><p>One trend I'm digging into is <strong>portable identity management</strong>. Startups are developing systems for longitudinal digital identities, which integrate seamlessly across different marketplaces. Imagine Craigslist 2.0, where your verified information, preferences, and transaction history travel with you into different niche marketplace verticals. This makes interactions smoother and more personalized, and means fewer fraud issues and enhanced trust between buyers and sellers.</p><p>Another trend is around harnessing the power of artificial intelligence to create <strong>new economies around very complex workflows</strong>. Most existing marketplaces focus on one-to-one matching. But AI can unlock more complex, multi-party coordination. Picture, for example, a personal finance management marketplace that unlocks access for a whole new customer segment that is unwilling to pay traditional advisor fees. An AI agent could organize advisors, tax consultants, investment products, insurance agents, and estate planners. The AI handles administrative tasks and sequencing, ensuring each step is assigned to the right professional at the right time. This frees up time for professionals to focus on their unique skills, take on significantly more clients (and compensation) for their time, and creates greater accountability and transparency for the end buyer.</p><p>One last trend is <strong>creator augmentation</strong>. In the last several years, brands have increasingly looked to influencers to market their products. In 2016, $1.7B was spent on influencer marketing. That figure reached $21.1B in 2023 (a 43% CAGR).<sup>[3]</sup> As this market continues to grow and large foundational models improve, it will allow these influencers to capture more rent from these arrangements and scale their likeness, tone, and personalized recommendations across the internet. Imagine a creator-led marketplace with:</p><ul><li><p>WYSIWYG tooling,<sup>[4]</sup> so creators can easily build custom, dynamic storefronts that are unique to their brand and voice</p></li><li><p>a personalized dashboard, powered by models fine-tuned with creator data. This would enable creators to communicate at scale across multiple channels and give them insight into what is motivating their fan base</p></li><li><p>bespoke product recommendations that align not just with the creator&#8217;s brand, but also with the specific follower and their specific relationship to that creator</p></li><li><p>full creator control over their tooling and data, to build trust, increase the likelihood of adoption, and provide an authentic user experience.</p></li></ul><p>If Sam Altman envisions a world where a <a href="https://fortune.com/2024/02/04/sam-altman-one-person-unicorn-silicon-valley-founder-myth/">one-person team can build a unicorn company</a>, perhaps a more immediate opportunity is for individual creators to build and manage their own mini-marketplaces, all on a singular back-end network.</p><p>Conceptually, none of these trends feel entirely new. Many have been in the entrepreneurial imagination for some time. 
But with recent technology advances, we&#8217;re finally seeing products materialize and deliver tangible functionality.</p><p>If you are building along any of these themes, or have a different take on what the next, landmark consumer marketplace will look like, I&#8217;d love to connect.</p><div><hr></div><div><hr></div><p><strong>Endnotes:</strong></p><p><strong>[1]</strong> Sources: <a href="https://a16z.com/marketplace-100/2022-edition/">2022</a>, <a href="https://a16z.com/marketplace-100/2021-edition/">2021</a>, <a href="https://a16z.com/marketplace-100/2020-edition/">2020</a> a16z Marketplace 100 reports. Instacart represented 64.2% of private marketplace GMV in 2022, 71.5% in 2021, and Airbnb, DoorDash, Instacart, and Postmates collectively represented 76% of private marketplace GMV in 2020. (Shout out to Bennett Carroccio, co-founder of <a href="https://www.shopcanal.com/">Canal</a> and an entrepreneur I have partnered with, who first ran this marketplace analysis in 2020.)</p><p><strong>[2] </strong>A formulary is a list of prescription medications approved for use and covered by a particular health insurance plan or provided by a healthcare provider.</p><p><strong>[3]</strong> <a href="https://influencermarketinghub.com/influencer-marketing-benchmark-report/">Source</a>.</p><p><strong>[4] </strong><em>WYSIWYG</em> stands for <em>"what you see is what you get."</em> It refers to a software interface that allows users to see what the end result will look like while the document or content is being created. This is commonly used in the context of text editors, website builders, and content management systems where the user can format text, insert images, and make other changes in a visual editor that directly reflects how the content will appear when published.</p>]]></content:encoded></item></channel></rss>