<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Vik's Newsletter]]></title><description><![CDATA[AI infrastructure research across photonics, memory, interconnects, power, and packaging. Engineering depth translated for professionals and investors.]]></description><link>https://www.viksnewsletter.com</link><image><url>https://substackcdn.com/image/fetch/$s_!9JlA!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa409d69d-ca10-4bfe-a1fc-f8d291690566_185x185.png</url><title>Vik&apos;s Newsletter</title><link>https://www.viksnewsletter.com</link></image><generator>Substack</generator><lastBuildDate>Wed, 29 Apr 2026 22:59:13 GMT</lastBuildDate><atom:link href="https://www.viksnewsletter.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Vikram Sekar]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[viksnewsletter@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[viksnewsletter@substack.com]]></itunes:email><itunes:name><![CDATA[Vikram Sekar]]></itunes:name></itunes:owner><itunes:author><![CDATA[Vikram Sekar]]></itunes:author><googleplay:owner><![CDATA[viksnewsletter@substack.com]]></googleplay:owner><googleplay:email><![CDATA[viksnewsletter@substack.com]]></googleplay:email><googleplay:author><![CDATA[Vikram Sekar]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Personal Update]]></title><description><![CDATA[New beginnings.]]></description><link>https://www.viksnewsletter.com/p/personal-update-2026</link><guid 
isPermaLink="false">https://www.viksnewsletter.com/p/personal-update-2026</guid><dc:creator><![CDATA[Vikram Sekar]]></dc:creator><pubDate>Wed, 29 Apr 2026 10:47:14 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!0XGn!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe4b63a6f-b09f-450c-a73e-8e464b24524b.heic" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This post is a personal update and is not on brand with the regular deep technical content of this newsletter. If that does not interest you, please skip this post. I am writing from <a href="https://en.wikipedia.org/wiki/Pondicherry">Pondicherry</a>, a coastal town in southern India with a rich French influence from its colonial years. I&#8217;m minutes away from warm beaches and I&#8217;d like to get back to my vacation quickly, but I also want to pen down some thoughts on my big life change. So let&#8217;s do this.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!0XGn!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe4b63a6f-b09f-450c-a73e-8e464b24524b.heic" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!0XGn!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe4b63a6f-b09f-450c-a73e-8e464b24524b.heic 424w, https://substackcdn.com/image/fetch/$s_!0XGn!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe4b63a6f-b09f-450c-a73e-8e464b24524b.heic 848w, 
https://substackcdn.com/image/fetch/$s_!0XGn!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe4b63a6f-b09f-450c-a73e-8e464b24524b.heic 1272w, https://substackcdn.com/image/fetch/$s_!0XGn!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe4b63a6f-b09f-450c-a73e-8e464b24524b.heic 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!0XGn!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe4b63a6f-b09f-450c-a73e-8e464b24524b.heic" width="1456" height="1941" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e4b63a6f-b09f-450c-a73e-8e464b24524b.heic&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1941,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1303530,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/heic&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.viksnewsletter.com/i/195734253?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe4b63a6f-b09f-450c-a73e-8e464b24524b.heic&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!0XGn!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe4b63a6f-b09f-450c-a73e-8e464b24524b.heic 424w, https://substackcdn.com/image/fetch/$s_!0XGn!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe4b63a6f-b09f-450c-a73e-8e464b24524b.heic 848w, 
https://substackcdn.com/image/fetch/$s_!0XGn!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe4b63a6f-b09f-450c-a73e-8e464b24524b.heic 1272w, https://substackcdn.com/image/fetch/$s_!0XGn!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe4b63a6f-b09f-450c-a73e-8e464b24524b.heic 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">Sunrise on the beach at Pondicherry, where I will be hiding away for the next few days. 
Source: My phone camera.</figcaption></figure></div><p>Without much ado, here&#8217;s my &#8220;big update.&#8221; After 15 years of working at semiconductor companies, I&#8217;ve decided to strike out on my own. My focus will be on researching and writing about semiconductors on Substack, building out the <a href="https://www.youtube.com/@SemiDoped">Semi Doped podcast</a>, and expanding into consulting. That&#8217;s the grand plan, at least.</p><p>Why the big change?</p><p>Having worked at small, medium, and large companies, I&#8217;ve realized that there is far more to the world of semiconductors than my daily duties ever exposed me to. I started this newsletter in 2024 to explore my curiosity and find out what else is out there; to give myself an opportunity to learn and explore technology on my own, without concern for whether it results in career advancement.</p><p>Even with many years of work experience, I realized how little I knew about how the industry works. Every piece of technology has vast technical depth and nuances that are humbling, and often I only scrape the surface of what lies beneath. Staring into the abyss of the unknown excites me more than it intimidates me. Today, my content channels are merely a way for me to explore that technological abyss and see what lurks beneath the surface.</p><p>I often describe my PhD years as my best years, but I never stopped to question why. Most people speak of grad school in tortured terms, but I&#8217;ve always looked back on it fondly. I did well too, by any conventional metric. I graduated with well over a dozen journal and conference papers to my name, including an award at an international IEEE conference. This, I naively mistook for intelligence - as if writing more papers somehow made me smarter. My fresh academic brain knew no better.</p><p>For a while, I considered getting into academia. I even applied to universities. 
My wife maintains to this day that I am an excellent teacher; I did the impossible by teaching her mathematics for her GRE. But I had seen the seedy underbelly of university life during my PhD years, and it wasn&#8217;t something I was eager to jump into. Industry was the obvious alternative for someone with an engineering degree like me. I entered the workforce in 2011, and it was not the best of times (perhaps it was the worst - for some). I applied to 77 jobs, got 3 callbacks, and received 2 offers. I picked the job in California since the pay was higher. Luckily, the &#8220;spray and pray&#8221; approach worked for me.</p><p>Ever since then, I have genuinely had a lot of fun in the industry for well over a decade, and I have met a lot of smart people along the way. At some point, it became clear to me that it was always going to be the &#8220;same thing, different place.&#8221; Nobody tells you how hard it is to change course once you have experience in a certain thing. You&#8217;re stuck doing the same thing forever, even if you don&#8217;t like it anymore, because you&#8217;re the expert now. Some people do, however, change roles with chameleon-like versatility. I never figured out how.</p><p>I was lucky in a sense. My best professional years were centered squarely on the wireless explosion. My skills were perfectly aligned for the 2010s, even though everyone had told me a decade earlier that doing electromagnetics for a living was a poor professional choice. When I joined my last employer, 5G was a matter of national concern. In the last 3 years, however, the semiconductor industry has been &#8230; different.</p><p>The birth of transformers has led to a semiconductor renaissance that I never imagined I would see in my lifetime. The last time semiconductors were this sexy was in the 1970s, when Robert Dennard taught us how to scale transistors. In the modern era, my skills started to seem dated, from a time gone by. 
I have started to believe that wireless technology is largely a solved problem today. The world of artificial intelligence, however, appears ripe with opportunity. The highest-agency action I could take was to learn everything about it that I could. Not for a job change, not to convince someone, but for the sole reason that it was cool.</p><p>Substack was the perfect motivator. I had to learn something new to write every week - a cadence that I have largely kept up since the birth of my Substack in Jan 2024. As I kept churning out articles, it struck me. The reason I enjoyed my PhD was that I could express complicated technical knowledge in simple words. My language in papers was not particularly stuffy, and my guess is that reviewers found my papers easy to read. Nearly every paper I wrote was accepted with minimal objections. Perhaps complex technical ideas, curiosity, and language were my <em>Ikigai</em> somehow. An odd combination, but it only makes sense in hindsight.</p><p>Today, I work more than I ever have. But it comes from a place of happiness - of doing what I truly enjoy. Through my writing, I have made so many connections with people that would never have been possible before. The commonly touted mantra that &#8220;writing makes your life better&#8221; is actually true. I recommend that everyone try some form of it.</p><p>With the support of all my paid subscribers, the newsletter income has grown to a point where I can support my lifestyle by writing on Substack &#8211; to where I can say I have &#8220;enough&#8221; to continue doing this full-time and hopefully have a lot of fun along the way. Thank you to everyone who supports this publication.</p><p>It&#8217;s all about figuring out one thing at a time. I now have location and time freedom to a large extent &#8211; which I fully appreciate. Being my own boss is liberating, but also scary. If you stop working, you stop getting paid. 
I am also hoping that the Substack will continue to grow.</p><p>The <a href="https://www.youtube.com/@SemiDoped">Semi Doped podcast</a> with <a href="https://substack.com/@chipstrat">Austin Lyons</a> has been a fun endeavor, and we plan to grow this channel with sponsorships and the like. The newly minted <a href="https://www.semidoped.com/">Semi Doped substack</a> is an extension of the podcast &#8211; quick hits on the semi industry, without the depth. We want to grow it into the <a href="https://www.morningbrew.com/">Morning Brew</a>, or <a href="https://join1440.com/">1440</a>, of semis. Free to read, but sponsor-driven. Reach out to us if you&#8217;re interested in sponsoring the Semi Doped newsletter or podcast.</p><p>Finally, I&#8217;m also slowly building out <a href="https://www.semiexponent.com/">SemiExponent</a> &#8211; the consulting arm of my semiconductor work. If you would like to work with me, feel free to reach out at vik (at) semiexponent (dot) com.</p><p>Regular programming for ViksNewsletter will resume next week. Paid subscriptions have been paused, but you can still sign up for the free tier. 
Everything will resume as normal from April 30th.</p><p>Till then, I&#8217;m off to the beach!</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.viksnewsletter.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><em>Join me in my adventure.</em></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[🍪 TWiC: TPU v8, Intel Rises Up, Marvell-Polariton]]></title><description><![CDATA[This Week in Chips: Everybody loves a good XPU.]]></description><link>https://www.viksnewsletter.com/p/twic-tpu-v8-intel-rises-up-marvell</link><guid isPermaLink="false">https://www.viksnewsletter.com/p/twic-tpu-v8-intel-rises-up-marvell</guid><dc:creator><![CDATA[Vikram Sekar]]></dc:creator><pubDate>Fri, 24 Apr 2026 11:31:20 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!EzH8!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff38e3c83-8ac6-46b3-949f-6e90539a776a_2280x1350.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>A quick personal update: I parted ways with Qualcomm; and will now be focusing my efforts on thinking, writing, and talking about the semiconductor industry. This is a big step for me, and I&#8217;m excited for the future. If there is any interest, I may follow up with a longer post with the thoughts around my decision. Let me know in the comments. 
</p><p>I am pausing subscriptions for just under a week (April 25th - April 30th) since I am leaving today for a week&#8217;s vacation. Your existing subscription will extend accordingly, and you will still have access to all paid articles. New paid subscribers will not be able to sign up during that time. There will be no deep-dive or TWiC next week, but there may be a Semi Doped podcast from the beach. &#127754; Not sure yet.</p><div><hr></div><p>This past week&#8217;s Substack deep-dive was about XPO, a new optical pluggable format spearheaded by Arista Networks, and why it matters for CPO.</p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;c3f052d9-7656-4028-a0e8-7a4509886da8&quot;,&quot;caption&quot;:&quot;As GPU counts in AI clusters grow, scale-out networking choices are becoming as critical as the compute itself. Switches like Broadcom&#8217;s Tomahawk 6 (TH6), carry 102.4T of aggregate bandwidth, while TH7 is expected to deliver 204.8T in 2027. This rapid scaling raises the ultimate question for the next generation of data centers: which networking technologies are best suited to handle this bandwidth without breaking the power budget?&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;XPO vs CPO: The Trade-offs Between Speed, Power, and Modularity in Next-Gen AI Networking&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:124411709,&quot;name&quot;:&quot;Vikram Sekar&quot;,&quot;bio&quot;:&quot;EE PhD. 15+ years in semiconductor engineering. 
I've helped build the technology I write about.&quot;,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!RTM-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5e3e259-005c-4bcf-bffd-1a20cd78aa86_1080x1080.png&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:100}],&quot;post_date&quot;:&quot;2026-04-21T14:01:08.745Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!ZsCV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F34fea9cb-cd97-403c-904d-ab84372a8af6_1684x1206.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.viksnewsletter.com/p/xpo-vs-cpo-the-trade-offs&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:194914698,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:39,&quot;comment_count&quot;:2,&quot;publication_id&quot;:2065897,&quot;publication_name&quot;:&quot;Vik's Newsletter&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!9JlA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa409d69d-ca10-4bfe-a1fc-f8d291690566_185x185.png&quot;,&quot;belowTheFold&quot;:false,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><p>CPUs for agentic AI are becoming hotter, and demand far outstrips supply at the moment. 
In light of this, I have removed the paywall from my most widely read and highest revenue-generating post ever on this Substack, so that more people can access it and sample the kind of content that lies behind paywalled deep dives.</p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;19c130e2-b4c0-42df-8b3d-95de977f5ab4&quot;,&quot;caption&quot;:&quot;For the better part of two years, CPUs have been an afterthought in AI infrastructure while GPUs got all the attention for training, and more recently inference. In the last 6 months, the rise of agentic AI is proving to be the &#8220;killer app&#8221; that AI inference needed.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;The CPU Bottleneck in Agentic AI and Why Server CPUs Matter More Than Ever&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:124411709,&quot;name&quot;:&quot;Vikram Sekar&quot;,&quot;bio&quot;:&quot;EE PhD. 15+ years in semiconductor engineering. 
I've helped build the technology I write about.&quot;,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!RTM-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5e3e259-005c-4bcf-bffd-1a20cd78aa86_1080x1080.png&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:100}],&quot;post_date&quot;:&quot;2026-02-17T06:01:31.028Z&quot;,&quot;cover_image&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7a3b3c30-89da-4ddc-876b-182e455e96e3_1456x1048.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.viksnewsletter.com/p/the-cpu-bottleneck-in-agentic-ai&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:188043920,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:96,&quot;comment_count&quot;:1,&quot;publication_id&quot;:2065897,&quot;publication_name&quot;:&quot;Vik's Newsletter&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!9JlA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa409d69d-ca10-4bfe-a1fc-f8d291690566_185x185.png&quot;,&quot;belowTheFold&quot;:false,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><p>I wrote about this just about two months ago, when other sharp analysts like <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;FundaAI&quot;,&quot;id&quot;:282947998,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/11f71bef-290f-4a13-8637-abb673cea09f_1317x1317.jpeg&quot;,&quot;uuid&quot;:&quot;e0265112-9ffd-43df-be93-f07ea777cd67&quot;}" data-component-name="MentionToDOM"></span> were calling out CPU shortages. The deep dive outlines why agentic AI demands more CPUs. 
The post has also been updated to add comparisons to the ARM AGI CPU. </p><p>On the Semi Doped podcast, we spoke about Credo acquiring Dust Photonics, more XPO discussion, and a new AI CPU company - NuvaCore.</p><div id="youtube2-aIX5Sku2URI" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;aIX5Sku2URI&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/aIX5Sku2URI?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>We&#8217;re also starting up a Substack presence for Semi Doped, where Austin and I will write our commentary on industry goings-on as a complement to the podcast. Our intention is to keep it <strong>entirely free</strong>, not overly technical, and accessible to a wide range of people interested in keeping a pulse on the semiconductor industry. It&#8217;s meant to be easy reading. 
Do sign up and tell your colleagues!</p><div class="embedded-publication-wrap" data-attrs="{&quot;id&quot;:8781267,&quot;name&quot;:&quot;Semi Doped&quot;,&quot;logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!ObUn!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F979b934f-dffb-48a2-8597-186738f44571_1024x1024.png&quot;,&quot;base_url&quot;:&quot;https://www.semidoped.com&quot;,&quot;hero_text&quot;:&quot;Semis and AI commentary from Vik Sekar and Austin Lyons&quot;,&quot;author_name&quot;:&quot;Semi Doped&quot;,&quot;show_subscribe&quot;:true,&quot;logo_bg_color&quot;:null,&quot;language&quot;:&quot;en&quot;}" data-component-name="EmbeddedPublicationToDOMWithSubscribe"><div class="embedded-publication show-subscribe"><a class="embedded-publication-link-part" native="true" href="https://www.semidoped.com?utm_source=substack&amp;utm_campaign=publication_embed&amp;utm_medium=web"><img class="embedded-publication-logo" src="https://substackcdn.com/image/fetch/$s_!ObUn!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F979b934f-dffb-48a2-8597-186738f44571_1024x1024.png" width="56" height="56"><span class="embedded-publication-name">Semi Doped</span><div class="embedded-publication-hero-text">Semis and AI commentary from Vik Sekar and Austin Lyons</div></a><form class="embedded-publication-subscribe" method="GET" action="https://www.semidoped.com/subscribe?"><input type="hidden" name="source" value="publication-embed"><input type="hidden" name="autoSubmit" value="true"><input type="email" class="email-input" name="email" placeholder="Type your email..."><input type="submit" class="button primary" value="Subscribe"></form></div></div><p>With all that housekeeping out of the way, let&#8217;s get to the news.</p><div><hr></div><div class="subscription-widget-wrap-editor" 
data-attrs="{&quot;url&quot;:&quot;https://www.viksnewsletter.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><em>News roundups are free, but deep-dives are where the real insights live. Upgrade to a paid sub and access 100+ from the archives!</em></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><div><hr></div><h3>Google announces TPU v8 chips and new networking architectures</h3><p><a href="https://cloud.google.com/blog/topics/google-cloud-next/welcome-to-google-cloud-next26">Google Cloud Next</a> is the biggest piece of news this week where the company announced their newest line of TPU chips: v8t for training and v8i for inference. This is the first TPU generation where there are two SKUs with different chip specifications. 
It is now clear that training and inference have different hardware needs: in both chips and networking.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Ev6N!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa15af95f-d61b-4373-823f-096dea608296_858x725.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Ev6N!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa15af95f-d61b-4373-823f-096dea608296_858x725.png 424w, https://substackcdn.com/image/fetch/$s_!Ev6N!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa15af95f-d61b-4373-823f-096dea608296_858x725.png 848w, https://substackcdn.com/image/fetch/$s_!Ev6N!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa15af95f-d61b-4373-823f-096dea608296_858x725.png 1272w, https://substackcdn.com/image/fetch/$s_!Ev6N!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa15af95f-d61b-4373-823f-096dea608296_858x725.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Ev6N!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa15af95f-d61b-4373-823f-096dea608296_858x725.png" width="858" height="725" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a15af95f-d61b-4373-823f-096dea608296_858x725.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:725,&quot;width&quot;:858,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:92083,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.viksnewsletter.com/i/195312589?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa15af95f-d61b-4373-823f-096dea608296_858x725.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Ev6N!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa15af95f-d61b-4373-823f-096dea608296_858x725.png 424w, https://substackcdn.com/image/fetch/$s_!Ev6N!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa15af95f-d61b-4373-823f-096dea608296_858x725.png 848w, https://substackcdn.com/image/fetch/$s_!Ev6N!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa15af95f-d61b-4373-823f-096dea608296_858x725.png 1272w, https://substackcdn.com/image/fetch/$s_!Ev6N!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa15af95f-d61b-4373-823f-096dea608296_858x725.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" 
viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Few interesting things to note:</p><ul><li><p>HBM capacity on the inference chip is higher than training. Memory bandwidth is critical for inference performance as more people and agents use AI. Training is not really general public facing, and you can build out a datacenter with more GPUs even if lesser HBM exists. It&#8217;s still not clear to me why not max out everything.</p></li><li><p>Look at the on chip SRAM on TPU 8i! It&#8217;s 3x higher than the training chip. Having SRAM allows you to automatically provide fast inference without having to use special chips like Groq LPU. The TPU 8i with its SRAM bandwidth can actually decode really fast. I&#8217;ve always maintained that having SRAM is the key here, not strange architectures like Groq&#8217;s VLIW architecture. 
In any case, Google TPU&#8217;s systolic arrays do provide deterministic behavior like Groq&#8217;s VLIW.</p></li><li><p>Arm Axion CPUs as the head node. This is a chip developed internally by Google based on the Arm architecture. More evidence that the ISA is not really that important anymore.</p></li></ul><p>Finally, the networking innovations.</p><ul><li><p>At the scale-out level, Google&#8217;s new <strong>Virgo architecture</strong> replaces their Jupiter network by using Optical Circuit Switches (OCS) with high switch radix. This allows a large number of chips to be connected with just two networking layers, greatly reducing latency.</p></li><li><p>At the scale-up level, the training chips still use a 3D torus. The inference chip networking now moves to a <strong>Boardfly approach</strong>.</p><ul><li><p>4 TPU 8i chips to a board &#8212; connected by copper</p></li><li><p>8 boards to a rack (also called a group) &#8212; connected by AEC</p></li><li><p>36 groups to the pod &#8212; connected by OCS</p></li><li><p>Total 1,152 TPUs per pod</p></li></ul></li></ul><p>The underlying substrate for all of Google&#8217;s TPU networking is OCS. This is a structural change that makes future Google datacenters rely heavily on both optics and copper. I explain the entire networking aspect in depth on Semi Doped. Look out for the next episode on YouTube.</p><h3>Intel goes from surviving to thriving</h3><p>Just a year ago, Intel was struggling for existence. TSMC was about to take control of Intel&#8217;s fabs and help resurrect the business - the brainchild of not-quite-loved ex-chairman of the board Frank Yeary. Even Qualcomm allegedly approached Intel for a takeover. 
</p><p>In a riches-to-rags-to-riches arc - the kind of comeback story everybody loves - Intel is now in a position of dominance across three different assets: (1) the x86 CPU franchise, (2) advanced packaging, and (3) their massive chip manufacturing network.</p><p>The rise of CPUs for agentic AI is a gift that fell in Intel&#8217;s lap because it precisely aligns with their core competence. It comes down to execution at this point. TSMC&#8217;s CoWoS is so bottlenecked that Intel&#8217;s EMIB packaging is seeing an ever-larger order backlog. Their 18A yields seem to be improving as well. The AI story is playing out: according to Intel&#8217;s latest earnings call, their AI-driven business represents 60% of revenue and grew 40% YoY.</p><p>It&#8217;s not just CPUs though. Lip-Bu stated on the earnings call that they are &#8220;quietly building up the GPU with a new hire&#8221; &#8212; which refers to <a href="https://www.crn.com/news/components-peripherals/2026/intel-hires-qualcomm-executive-to-lead-gpu-engineering-for-data-centers">Eric Demers being hired as Chief GPU Architect</a> earlier this year; he also led Qualcomm&#8217;s GPU efforts before joining Intel.</p><p>Lip-Bu has also been smitten by the Elon bug as part of the Terafab project, stating on the earnings call:</p><blockquote><p>We are excited to explore innovative ways to refactor silicon process technology, looking for unconventional ways to improve manufacturing efficiency that will eventually lead to a dynamic improvement in the economics of semiconductor manufacturing.</p></blockquote><p>Sure, you do you &#8212; as long as we can get 18A yield up here on Earth before we start building fabs in space.</p><h3>Marvell announces acquisition of Polariton for plasmonics</h3><p>You can&#8217;t take your eye off the optics ball with all these CPU/TPU distractions. 
Marvell continues its spree of acquiring photonics companies in its march toward next-gen 3.2T networking, <a href="https://www.marvell.com/company/newsroom/marvell-acquires-polariton-advancing-future-of-optical-connectivity.html">announcing the acquisition</a> of <a href="https://www.polariton.ch/">Polariton Technologies</a> &#8212; a Swiss startup.</p><p>It&#8217;s too late in the week for heavy physics, so we&#8217;ll save that for a future deep dive. Let&#8217;s look at this cool picture first. This is what Polariton does.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!EzH8!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff38e3c83-8ac6-46b3-949f-6e90539a776a_2280x1350.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!EzH8!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff38e3c83-8ac6-46b3-949f-6e90539a776a_2280x1350.jpeg 424w, https://substackcdn.com/image/fetch/$s_!EzH8!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff38e3c83-8ac6-46b3-949f-6e90539a776a_2280x1350.jpeg 848w, https://substackcdn.com/image/fetch/$s_!EzH8!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff38e3c83-8ac6-46b3-949f-6e90539a776a_2280x1350.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!EzH8!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff38e3c83-8ac6-46b3-949f-6e90539a776a_2280x1350.jpeg 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!EzH8!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff38e3c83-8ac6-46b3-949f-6e90539a776a_2280x1350.jpeg" width="1456" height="862" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f38e3c83-8ac6-46b3-949f-6e90539a776a_2280x1350.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:862,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Polariton Technology&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Polariton Technology" title="Polariton Technology" srcset="https://substackcdn.com/image/fetch/$s_!EzH8!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff38e3c83-8ac6-46b3-949f-6e90539a776a_2280x1350.jpeg 424w, https://substackcdn.com/image/fetch/$s_!EzH8!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff38e3c83-8ac6-46b3-949f-6e90539a776a_2280x1350.jpeg 848w, https://substackcdn.com/image/fetch/$s_!EzH8!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff38e3c83-8ac6-46b3-949f-6e90539a776a_2280x1350.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!EzH8!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff38e3c83-8ac6-46b3-949f-6e90539a776a_2280x1350.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex 
pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Source: Polariton.</figcaption></figure></div><p>You can send an optical signal through an electro-optic dielectric material sandwiched between two metal electrodes. When you provide electrical 0s and 1s on one of the metal electrodes, the electrons on the metal interact with the light flowing through the dielectric waveguide causing Surface Plasma Polaritons (SPPs). These things affect the phase of the light creating highly effective, and very tiny, phase shift modulators. 
</p><p>The tiny size of this device has two advantages:</p><ol><li><p>Low parasitic capacitance, which means you can modulate at high speeds.</p></li><li><p>Much lower power dissipation.</p></li></ol><p>It&#8217;s fascinating to see light-matter interaction physics hit mainstream applications. I guess we&#8217;re really pushing technology now.</p><div><hr></div><p>Have a great weekend!</p><div class="captioned-button-wrap" data-attrs="{&quot;url&quot;:&quot;https://www.viksnewsletter.com/p/twic-tpu-v8-intel-rises-up-marvell?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="CaptionedButtonToDOM"><div class="preamble"><p class="cta-caption"><em>This post is public so feel free to share it.</em></p></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.viksnewsletter.com/p/twic-tpu-v8-intel-rises-up-marvell?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.viksnewsletter.com/p/twic-tpu-v8-intel-rises-up-marvell?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p></div><p></p>]]></content:encoded></item><item><title><![CDATA[XPO vs CPO: The Trade-offs Between Speed, Power, and Modularity in Next-Gen AI Networking]]></title><description><![CDATA[A technical walkthrough of the new XPO form factor, and the scale-out networking economics that shape the CPO vs XPO decision]]></description><link>https://www.viksnewsletter.com/p/xpo-vs-cpo-the-trade-offs</link><guid isPermaLink="false">https://www.viksnewsletter.com/p/xpo-vs-cpo-the-trade-offs</guid><dc:creator><![CDATA[Vikram Sekar]]></dc:creator><pubDate>Tue, 21 Apr 2026 14:01:08 GMT</pubDate><enclosure 
url="https://substackcdn.com/image/fetch/$s_!ZsCV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F34fea9cb-cd97-403c-904d-ab84372a8af6_1684x1206.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>As GPU counts in AI clusters grow, scale-out networking choices are becoming as critical as the compute itself. Switches like Broadcom&#8217;s Tomahawk 6 (TH6), carry 102.4T of aggregate bandwidth, while TH7 is expected to deliver 204.8T in 2027. This rapid scaling raises the ultimate question for the next generation of data centers: which networking technologies are best suited to handle this bandwidth without breaking the power budget?</p><p>The networking industry has been making rapid strides towards Co-Packaged Optics (CPO) for scale-out networking with external lasers, replaceable 3D optical engines, and per-lane data rates of 200G or more. Traditional arguments against CPO such as laser failures and serviceability are no longer show-stoppers, as explained later in this post. Still, CPO is an architectural reset that upends the traditional pluggable optics market and consolidates various networking components into the hands of a few suppliers like Nvidia, Broadcom, and Marvell.</p><p>This paradigm shift puts immense pressure on pluggable transceiver manufacturers like Eoptolink, and system integrators like Arista Networks. As optical functions shift to <a href="https://www.viksnewsletter.com/p/a-comprehensive-primer-on-advanced-packaging">advanced silicon packaging using TSMC&#8217;s CoWoS</a> and COUPE, the value chain moves away from traditional PCB assembly and rack integration. In a defensive play to protect their margins and market share, these players are responding with a new format of their own.</p><p>At OFC 2026, Arista Networks introduced <a href="https://www.arista.com/en/company/news/press-release/23697-pr-20260311">eXtra-dense Pluggable Optics (XPO)</a> for AI data centers. 
XPO keeps the existing pluggable approach, but scales it for higher bandwidth and better thermal management. The XPO multi-source agreement (MSA) now has <a href="https://www.msn.com/en-us/money/other/arista-s-xpo-product-sees-over-100-agreement-partners-bnp-paribas/ar-AA20hmeZ?uxmode=ruby">&gt;100 participating companies</a>, including Marvell (playing both CPO and pluggable sides), Lumentum, Coherent, and Eoptolink - a sign that the industry is serious about this.</p><p><strong>In this article, we&#8217;ll cover three things:</strong></p><ol><li><p>A primer on the XPO format and how it solves the faceplate bottleneck</p></li><li><p>The three-way trade-off between speed, power, and modularity that decides CPO versus XPO at each generation</p></li><li><p>Why CPO and XPO formats will coexist in scale-out at 1.6T but diverge as lane speeds climb to 3.2T and beyond.</p></li></ol><p>We&#8217;ll conclude with a view on where each format lands across scale-up, scale-out, and scale-across, and why multiple strategies will likely coexist for years.</p><p>This post will have a lot of optical terminology. If you are unfamiliar with it, please see an earlier post below that explains most of what you need to know.</p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;4e268073-f4cc-42cb-b36a-04e5694f8538&quot;,&quot;caption&quot;:&quot;Welcome to a &#128274; subscriber-only deep-dive edition &#128274; of my weekly newsletter. Each week, I help investors, professionals and students stay up-to-date on complex topics, and navigate the semiconductor industry.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;A Complete Guide to Optical Transceiver Nomenclature&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:124411709,&quot;name&quot;:&quot;Vikram Sekar&quot;,&quot;bio&quot;:&quot;EE PhD. 15+ years in semiconductor engineering. 
I've helped build the technology I write about.&quot;,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!RTM-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5e3e259-005c-4bcf-bffd-1a20cd78aa86_1080x1080.png&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:100}],&quot;post_date&quot;:&quot;2025-11-24T18:29:36.626Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!ieoZ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5186814b-1b9c-4f79-b5f1-cf551ab803b8_1190x578.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.viksnewsletter.com/p/a-complete-guide-to-optical-transceiver-nomenclature&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:179819530,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:29,&quot;comment_count&quot;:3,&quot;publication_id&quot;:2065897,&quot;publication_name&quot;:&quot;Vik's Newsletter&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!9JlA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa409d69d-ca10-4bfe-a1fc-f8d291690566_185x185.png&quot;,&quot;belowTheFold&quot;:false,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.viksnewsletter.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><em>Vik's Newsletter is a reader-supported publication. 
To receive new posts and support my work, consider becoming a free or paid subscriber.</em></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>If you are not a paid subscriber, you can purchase just this article using the button below. You can find the whole catalog of articles for purchase <a href="https://www.semiexponent.com/products">at this link</a>.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.semiexponent.com/xpo-vs-cpo-the-trade-offs-between-speed-power-and-modularity-in-next-gen-ai-networking&quot;,&quot;text&quot;:&quot;Buy the ebook&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.semiexponent.com/xpo-vs-cpo-the-trade-offs-between-speed-power-and-modularity-in-next-gen-ai-networking"><span>Buy the ebook</span></a></p><div><hr></div><h3>How XPO Solves the Faceplate Bottleneck</h3><p>As switch ASICs scale toward 204.8T, the physical real estate of the switch faceplate becomes a severe bottleneck. The traditional pluggable model simply runs out of physical space. Consider the math for outfitting a 204.8T switch with standard 1.6T OSFP modules: a standard 1OU (Open Rack Unit) slot accommodates about 32 OSFP plugs, delivering 51.2T of bandwidth. 
Fully outfitting a 204.8T switch therefore requires 4OU of rack space purely for front-panel connections.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Bg0y!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6b00270-9471-4bbb-9d79-3e7f89656b45_2048x1017.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Bg0y!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6b00270-9471-4bbb-9d79-3e7f89656b45_2048x1017.png 424w, https://substackcdn.com/image/fetch/$s_!Bg0y!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6b00270-9471-4bbb-9d79-3e7f89656b45_2048x1017.png 848w, https://substackcdn.com/image/fetch/$s_!Bg0y!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6b00270-9471-4bbb-9d79-3e7f89656b45_2048x1017.png 1272w, https://substackcdn.com/image/fetch/$s_!Bg0y!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6b00270-9471-4bbb-9d79-3e7f89656b45_2048x1017.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Bg0y!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6b00270-9471-4bbb-9d79-3e7f89656b45_2048x1017.png" width="1456" height="723" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c6b00270-9471-4bbb-9d79-3e7f89656b45_2048x1017.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:723,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Bg0y!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6b00270-9471-4bbb-9d79-3e7f89656b45_2048x1017.png 424w, https://substackcdn.com/image/fetch/$s_!Bg0y!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6b00270-9471-4bbb-9d79-3e7f89656b45_2048x1017.png 848w, https://substackcdn.com/image/fetch/$s_!Bg0y!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6b00270-9471-4bbb-9d79-3e7f89656b45_2048x1017.png 1272w, https://substackcdn.com/image/fetch/$s_!Bg0y!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6b00270-9471-4bbb-9d79-3e7f89656b45_2048x1017.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Source: Arista</figcaption></figure></div><p>XPO alters this physical footprint entirely. An XPO module delivers 12.8T of bandwidth, or eight times the capacity of a 1.6T OSFP. 
While the XPO form factor is roughly 2.7x wider (allowing only 16 plugs per 1OU), those 16 plugs can deliver the full 204.8T in that single rack unit.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ZsCV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F34fea9cb-cd97-403c-904d-ab84372a8af6_1684x1206.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ZsCV!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F34fea9cb-cd97-403c-904d-ab84372a8af6_1684x1206.png 424w, https://substackcdn.com/image/fetch/$s_!ZsCV!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F34fea9cb-cd97-403c-904d-ab84372a8af6_1684x1206.png 848w, https://substackcdn.com/image/fetch/$s_!ZsCV!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F34fea9cb-cd97-403c-904d-ab84372a8af6_1684x1206.png 1272w, https://substackcdn.com/image/fetch/$s_!ZsCV!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F34fea9cb-cd97-403c-904d-ab84372a8af6_1684x1206.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!ZsCV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F34fea9cb-cd97-403c-904d-ab84372a8af6_1684x1206.png" width="726" height="520.0673076923077" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/34fea9cb-cd97-403c-904d-ab84372a8af6_1684x1206.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1043,&quot;width&quot;:1456,&quot;resizeWidth&quot;:726,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!ZsCV!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F34fea9cb-cd97-403c-904d-ab84372a8af6_1684x1206.png 424w, https://substackcdn.com/image/fetch/$s_!ZsCV!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F34fea9cb-cd97-403c-904d-ab84372a8af6_1684x1206.png 848w, https://substackcdn.com/image/fetch/$s_!ZsCV!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F34fea9cb-cd97-403c-904d-ab84372a8af6_1684x1206.png 1272w, https://substackcdn.com/image/fetch/$s_!ZsCV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F34fea9cb-cd97-403c-904d-ab84372a8af6_1684x1206.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Source: Arista</figcaption></figure></div><p><strong>The core advantages of XPO:</strong></p><ul><li><p><strong>4x Density Gain:</strong> It reclaims 3OU of highly valuable rack space per switch, reallocating it back to CPUs and GPUs at the datacenter level.</p></li><li><p><strong>Reduced Component Failure Domain:</strong> Just 16 XPO plugs replace the DSP, transimpedance amplifier (TIA - that amplifies the detected signal), driver, and photodetector components that would otherwise be spread across 128 OSFP plugs.</p></li><li><p><strong>Supply Chain Continuity:</strong> It retains the external, modular approach hyperscalers are accustomed to. 
MSAs ensure a diverse supply chain, and the format supports any optical interconnect reach (DR, LR, ZR, ZR+) and technology (IMDD (<a href="https://en.wikipedia.org/wiki/Intensity_modulation">intensity modulation direct-detection</a>), coherent, coherent-lite, micro-LEDs, and RF).</p></li></ul><h3>DSP Power and Cooling in XPO</h3><p>The primary downside of XPO&#8217;s pluggable format is that signal degradation over the copper interconnects between the faceplate and the switch ASIC must be compensated by power-hungry DSP chips. XPO offers three implementation paths to manage this, in decreasing order of power consumption:</p><ol><li><p><strong>Fully Retimed:</strong> Utilizes a DSP on both the transmit and receive sides (highest power).</p></li><li><p><strong>Linear Receive Optics (LRO / Half-Retimed):</strong> Utilizes a DSP only on the transmit side, saving power while maintaining signal integrity.</p></li><li><p><strong>Linear Pluggable Optics (LPO):</strong> Eliminates the DSP entirely, relying only on drivers and equalizers, leaving the heavy lifting to the switch ASIC (lowest power).</p></li></ol><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!BrFJ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8075fd9e-4cc5-4d21-890d-c607707b4a87_384x512.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!BrFJ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8075fd9e-4cc5-4d21-890d-c607707b4a87_384x512.png 424w, https://substackcdn.com/image/fetch/$s_!BrFJ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8075fd9e-4cc5-4d21-890d-c607707b4a87_384x512.png 848w, 
https://substackcdn.com/image/fetch/$s_!BrFJ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8075fd9e-4cc5-4d21-890d-c607707b4a87_384x512.png 1272w, https://substackcdn.com/image/fetch/$s_!BrFJ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8075fd9e-4cc5-4d21-890d-c607707b4a87_384x512.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!BrFJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8075fd9e-4cc5-4d21-890d-c607707b4a87_384x512.png" width="384" height="512" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8075fd9e-4cc5-4d21-890d-c607707b4a87_384x512.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:512,&quot;width&quot;:384,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!BrFJ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8075fd9e-4cc5-4d21-890d-c607707b4a87_384x512.png 424w, https://substackcdn.com/image/fetch/$s_!BrFJ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8075fd9e-4cc5-4d21-890d-c607707b4a87_384x512.png 848w, 
https://substackcdn.com/image/fetch/$s_!BrFJ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8075fd9e-4cc5-4d21-890d-c607707b4a87_384x512.png 1272w, https://substackcdn.com/image/fetch/$s_!BrFJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8075fd9e-4cc5-4d21-890d-c607707b4a87_384x512.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Fully retimed, LRO, and LPO approaches. The difference lies in where the DSP is used. 
Source: DustPhotonics</figcaption></figure></div><p>Regardless of the DSP architecture, XPO&#8217;s immense bandwidth density makes traditional air cooling impossible. <strong>Liquid cooling is essential.</strong> Each XPO plug features an integrated cold plate designed to dissipate roughly 400W. While some legacy solutions attempt to bolt cold plates onto the exterior of OSFP plugs, XPO&#8217;s natively integrated liquid cooling is vastly more efficient. This also implies that racks in a datacenter using XPO need to have liquid cooling available, which might be difficult in legacy infrastructure.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!icpK!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F487e35c8-bebf-4c22-a3df-2d7062f82924_2048x1084.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!icpK!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F487e35c8-bebf-4c22-a3df-2d7062f82924_2048x1084.png 424w, https://substackcdn.com/image/fetch/$s_!icpK!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F487e35c8-bebf-4c22-a3df-2d7062f82924_2048x1084.png 848w, https://substackcdn.com/image/fetch/$s_!icpK!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F487e35c8-bebf-4c22-a3df-2d7062f82924_2048x1084.png 1272w, https://substackcdn.com/image/fetch/$s_!icpK!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F487e35c8-bebf-4c22-a3df-2d7062f82924_2048x1084.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!icpK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F487e35c8-bebf-4c22-a3df-2d7062f82924_2048x1084.png" width="1456" height="771" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/487e35c8-bebf-4c22-a3df-2d7062f82924_2048x1084.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:771,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!icpK!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F487e35c8-bebf-4c22-a3df-2d7062f82924_2048x1084.png 424w, https://substackcdn.com/image/fetch/$s_!icpK!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F487e35c8-bebf-4c22-a3df-2d7062f82924_2048x1084.png 848w, https://substackcdn.com/image/fetch/$s_!icpK!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F487e35c8-bebf-4c22-a3df-2d7062f82924_2048x1084.png 1272w, https://substackcdn.com/image/fetch/$s_!icpK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F487e35c8-bebf-4c22-a3df-2d7062f82924_2048x1084.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft 
icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Source: Arista</figcaption></figure></div><p>In comparison, CPO boasts the lowest power consumption per port across the board. Because optical-to-electrical conversion happens directly adjacent to the switch silicon, CPO bypasses the need for heavy external DSPs. 
How XPO ultimately stacks up against CPO economically depends heavily on which of the three XPO implementations - Full, LRO, or LPO - a data center adopts.</p><div id="datawrapper-iframe" class="datawrapper-wrap outer" data-attrs="{&quot;url&quot;:&quot;https://datawrapper.dwcdn.net/dvBxW/1/&quot;,&quot;thumbnail_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/68764b59-5bab-4c9f-868e-cdfb283b3915_1220x1152.png&quot;,&quot;thumbnail_url_full&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4516bb79-7ce6-49a3-b608-c614221568eb_1220x1222.png&quot;,&quot;height&quot;:615,&quot;title&quot;:&quot;Energy Efficiency: 1.6T/3.2T Scale-Out Optics&quot;,&quot;description&quot;:&quot;Create interactive, responsive &amp; beautiful charts &#8212; no code required.&quot;}" data-component-name="DatawrapperToDOM"><iframe id="iframe-datawrapper" class="datawrapper-iframe" src="https://datawrapper.dwcdn.net/dvBxW/1/" width="730" height="615" frameborder="0" scrolling="no"></iframe><script type="text/javascript">!function(){"use strict";window.addEventListener("message",(function(e){if(void 0!==e.data["datawrapper-height"]){var t=document.querySelectorAll("iframe");for(var a in e.data["datawrapper-height"])for(var r=0;r<t.length;r++){if(t[r].contentWindow===e.source)t[r].style.height=e.data["datawrapper-height"][a]+"px"}}}))}();</script></div><p><em>Approximate numbers; varies based on vendor and implementation.</em></p><p>That sets up the real question: which format actually wins, and where? 
The answer depends on how hyperscalers weigh three trade-offs that diverge as we push to higher lane speeds: speed, power, and modularity.</p><p><strong>After the paywall:</strong></p><ul><li><p>A deep dive into the trilemma of speed, power, and modularity that dictates whether CPO or XPO wins in future networks.</p></li><li><p>A detailed breakdown of CPO&#8217;s superior energy efficiency, how serviceability concerns are solved, and its market impact on the pluggable optics ecosystem.</p></li><li><p>An analysis of the XPO-LPO and XPO-LRO &#8220;Goldilocks&#8221; approaches for 1.6T networking, and their limitations on lane speed and supply chain concentration.</p></li><li><p>A look at the divergence point: why XPO becomes power-prohibitive at 3.2T and beyond, making CPO the more attractive future-proof architecture.</p></li><li><p>The final verdict on where CPO makes the most sense (scale-out) and why XPO is the natural fit for scale-across networks.</p></li></ul>
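<p><em>The chart above lists approximate pJ/bit figures. As a back-of-the-envelope illustration of why those numbers matter, here is a minimal Python sketch that converts an architecture&#8217;s energy efficiency into per-port power at a given line rate. The pJ/bit values below are placeholder assumptions for illustration only, not the chart&#8217;s data or any vendor&#8217;s measurements.</em></p>

```python
# Sketch: translate energy efficiency (pJ/bit) into per-port power (W).
# The efficiency numbers below are ILLUSTRATIVE ASSUMPTIONS, not vendor data.

PICOJOULE = 1e-12  # joules per picojoule

def port_power_watts(pj_per_bit: float, port_gbps: float) -> float:
    """Power drawn by one port: energy per bit times bits per second."""
    return pj_per_bit * PICOJOULE * port_gbps * 1e9

# Hypothetical efficiency figures for a 1.6T (1600 Gb/s) port.
architectures = {
    "XPO full retimed": 25.0,  # DSP on both TX and RX (highest power)
    "XPO LRO":          18.0,  # DSP on TX only
    "XPO LPO":          12.0,  # no DSP; drivers and equalizers only
    "CPO":               7.0,  # O/E conversion adjacent to switch silicon
}

for name, pj in architectures.items():
    print(f"{name:>17}: {port_power_watts(pj, 1600):5.1f} W per 1.6T port")
```

<p><em>The same arithmetic shows why the gap widens at 3.2T: doubling the line rate doubles every architecture&#8217;s absolute power, so a few pJ/bit of difference becomes tens of watts per port across a switch faceplate.</em></p>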
      <p>
          <a href="https://www.viksnewsletter.com/p/xpo-vs-cpo-the-trade-offs">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[🍪 TWiC: Credo eats Dust, CPU and NAND shortage, Largan → CPO, Nuvacore]]></title><description><![CDATA[This week in chips: Photonics = &#10084;&#65039;, shortages abound, and a CPU startup.]]></description><link>https://www.viksnewsletter.com/p/twic-credo-eats-dust-cpu-and-nand</link><guid isPermaLink="false">https://www.viksnewsletter.com/p/twic-credo-eats-dust-cpu-and-nand</guid><dc:creator><![CDATA[Vikram Sekar]]></dc:creator><pubDate>Fri, 17 Apr 2026 11:31:23 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/5a45bbf9-aabe-4cb1-8bfb-3ccfbd2384d8_800x533.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Plenty of photonics news this week as we continue to push optics in datacenters. Plus thoughts on storage, CPUs, and other strange bits and bobs.</p><p>Here&#8217;s what you might have missed this week:</p><ul><li><p><a href="https://open.substack.com/pub/viksnewsletter/p/tokenmaxxing-and-the-token-value-chain?utm_campaign=post-expanded-share&amp;utm_medium=web">Tokenmaxxing and the Token Value Chain</a></p></li><li><p><a href="https://www.youtube.com/watch?v=rFf8ABolyi0">Is Intel Finally Back with a $300B market cap? OpenClaw can Dream?</a></p></li></ul><p>Now, on to the news.</p><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.viksnewsletter.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><em>News roundups are free, but deep-dives are where the real insights live. 
Upgrade to a paid sub and access 100+ deep-dives from the archives!</em></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><div><hr></div><h4>Credo agrees to acquire Dust Photonics</h4><p>This is perhaps the biggest piece of news this week, with many Substacks covering it. We won&#8217;t get into the depths of the deal because a simple search will uncover a wealth of takes which you can read. I&#8217;ll only touch on a few important points here.</p><p>Dust Photonics provides photonic ICs that integrate nicely with Credo&#8217;s SerDes expertise, making them a full-stack interconnect provider &#8212; spanning active electrical cables, pluggable optics, microLED cables, and now integrated photonics. This latest acquisition makes Credo an interconnect powerhouse in the AI era. </p><p>I&#8217;ve also maintained that Credo&#8217;s &#8220;zero-flap&#8221; metrology for link health is one of their biggest strengths: a software layer that really makes the interconnect layer work at scale. You&#8217;ll also notice that Marvell has a lot of parallels in the interconnect space here: SerDes expertise, optical DSP, photonics, and telemetry systems. Credo is a smaller player, and gives the big dog (MRVL) a run for its money.</p><p>I really like the explanation of the acquisition from Gavin Baker (who runs Atreides Management, an investor in Dust Photonics) in his tweet, which you should read in its entirety.</p><div class="twitter-embed" data-attrs="{&quot;url&quot;:&quot;https://x.com/GavinSBaker/status/2044410644301046031?s=20&quot;,&quot;full_text&quot;:&quot;Nice to see Credo acquire <span class=\&quot;tweet-fake-link\&quot;>@Atreidesmgmt</span> portfolio company DustPhotonics. 
Great team, great company.\n&nbsp;\nDust designs photonic ICs (PICs) and engines that are foundational to Silicon Photonics.&nbsp;Silicon Photonic pluggables integrate multiple optical functions &#8211; traditionally&quot;,&quot;username&quot;:&quot;GavinSBaker&quot;,&quot;name&quot;:&quot;Gavin Baker&quot;,&quot;profile_image_url&quot;:&quot;https://pbs.substack.com/profile_images/1396219525754937345/5L4n5L3O_normal.jpg&quot;,&quot;date&quot;:&quot;2026-04-15T13:40:55.000Z&quot;,&quot;photos&quot;:[],&quot;quoted_tweet&quot;:{},&quot;reply_count&quot;:26,&quot;retweet_count&quot;:28,&quot;like_count&quot;:424,&quot;impression_count&quot;:81999,&quot;expanded_url&quot;:null,&quot;video_url&quot;:null,&quot;belowTheFold&quot;:true}" data-component-name="Twitter2ToDOM"></div><p>The most important takeaway from what Gavin said is that copper and optics are <em>not</em> a zero-sum game. Copper will remain relevant well into the 2030s, and optics will play an increasing role in datacenters. He identifies scale-up as the area with the most networking growth (note that it need not be optical), followed by scale-across (this was surprising to me), and finally scale-out (which might involve CPO at some point). All this while the laser shortage is getting worse!</p><p>He calls out that Credo&#8217;s Hyperlume acquisition was a smart bet, and that there is a lot of positive feedback about microLEDs as viable interconnect tech. We&#8217;ve started covering microLEDs on this newsletter (free post below). Also, the upcoming Semi Doped episode has a longer discussion on the Credo acquisition. Stay tuned!</p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;6954455a-56f7-4f76-9f36-9453dd79d61f&quot;,&quot;caption&quot;:&quot;There has been considerable chatter about Lumentum&#8217;s lasers and their narrow linewidth, contrasted with questions about the viability of microLEDs in datacenters. 
I&#8217;ve been meaning to write about microLEDs for quite a while; we&#8217;ll cover some general aspects here. Since MicroLEDs are a big topic, this is best handled as a sequence of posts that build up &#8230;&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;MicroLEDs vs. Lasers: The Linewidth Tradeoff&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:124411709,&quot;name&quot;:&quot;Vikram Sekar&quot;,&quot;bio&quot;:&quot;EE PhD. 15+ years in semiconductor engineering. I've helped build the technology I write about.&quot;,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!RTM-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5e3e259-005c-4bcf-bffd-1a20cd78aa86_1080x1080.png&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:100}],&quot;post_date&quot;:&quot;2026-04-07T16:40:58.711Z&quot;,&quot;cover_image&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a8d51b49-98b0-4fab-8148-53cb04517219_1456x1048.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.viksnewsletter.com/p/microleds-vs-lasers-the-linewidth-tradeoff&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:193481292,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:31,&quot;comment_count&quot;:13,&quot;publication_id&quot;:2065897,&quot;publication_name&quot;:&quot;Vik's Newsletter&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!9JlA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa409d69d-ca10-4bfe-a1fc-f8d291690566_185x185.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><p>(via <a 
href="https://investors.credosemi.com/news-events/news/news-details/2026/Credo-Agrees-to-Acquire-DustPhotonics-Accelerating-Expansion-into-Silicon-Photonics-and-Next-Generation-Optical-Connectivity/default.aspx">Credo</a>)</p><h4>Thoughts on the CPU Shortage</h4><p>We&#8217;ve covered the need for CPUs in agentic AI in quite some depth, and the article below has been the most popular, highest earning article YTD. We called out early that CPU:GPU ratios are going to go to one, or higher.</p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;16cc3170-83bf-421b-bb7b-4a03ad824887&quot;,&quot;caption&quot;:&quot;For the better part of two years, CPUs have been an afterthought in AI infrastructure while GPUs got all the attention for training, and more recently inference. In the last 6 months, the rise of agentic AI is proving to be the &#8220;killer app&#8221; that AI inference needed.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;The CPU Bottleneck in Agentic AI and Why Server CPUs Matter More Than Ever&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:124411709,&quot;name&quot;:&quot;Vikram Sekar&quot;,&quot;bio&quot;:&quot;EE PhD. 15+ years in semiconductor engineering. 
I've helped build the technology I write about.&quot;,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!RTM-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5e3e259-005c-4bcf-bffd-1a20cd78aa86_1080x1080.png&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:100}],&quot;post_date&quot;:&quot;2026-02-17T06:01:31.028Z&quot;,&quot;cover_image&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7a3b3c30-89da-4ddc-876b-182e455e96e3_1456x1048.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.viksnewsletter.com/p/the-cpu-bottleneck-in-agentic-ai&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:188043920,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:81,&quot;comment_count&quot;:1,&quot;publication_id&quot;:2065897,&quot;publication_name&quot;:&quot;Vik's Newsletter&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!9JlA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa409d69d-ca10-4bfe-a1fc-f8d291690566_185x185.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;1bbf5407-d4d0-445d-b8cd-bab289179bc9&quot;,&quot;caption&quot;:&quot;In last week&#8217;s article, we discussed why CPUs are critical in the era of agentic AI. The &#8220;operational burden&#8221; of the orchestrating AI tasks falls on the CPU, which affects GPU utilization rates and eventually TCO. 
Check out the whole post.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;The AI Datacenter CPU Yellow Pages&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:124411709,&quot;name&quot;:&quot;Vikram Sekar&quot;,&quot;bio&quot;:&quot;EE PhD. 15+ years in semiconductor engineering. I've helped build the technology I write about.&quot;,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!RTM-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5e3e259-005c-4bcf-bffd-1a20cd78aa86_1080x1080.png&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:100}],&quot;post_date&quot;:&quot;2026-02-24T13:21:40.703Z&quot;,&quot;cover_image&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/754d30f1-1b70-4131-b083-869177b41afb_1456x1048.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.viksnewsletter.com/p/the-ai-datacenter-cpu-yellow-pages&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:188873398,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:39,&quot;comment_count&quot;:0,&quot;publication_id&quot;:2065897,&quot;publication_name&quot;:&quot;Vik's Newsletter&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!9JlA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa409d69d-ca10-4bfe-a1fc-f8d291690566_185x185.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><p>In the two months since the article was published, the CPU shortage has gotten worse &#8212; to the point that cloud service providers that have nothing to do with AI inference are falling short of CPUs. 
The x86 vs ARM argument does not really even matter anymore. CSPs and hyperscalers will take anything they can lay their hands on, and port over software if they have to.</p><div class="twitter-embed" data-attrs="{&quot;url&quot;:&quot;https://x.com/pradeeepk/status/2043901994738561423?s=20&quot;,&quot;full_text&quot;:&quot;OpenAI ported their entire code base to ARM so that they can use $AMZN graviton CPUs as CPUs capacity is very short\n\nsource <span class=\&quot;tweet-fake-link\&quot;>@dylan522p</span> &quot;,&quot;username&quot;:&quot;pradeeepk&quot;,&quot;name&quot;:&quot;MacroValue&quot;,&quot;profile_image_url&quot;:&quot;https://pbs.substack.com/profile_images/2041817114223505408/QqOEu3Xi_normal.jpg&quot;,&quot;date&quot;:&quot;2026-04-14T03:59:44.000Z&quot;,&quot;photos&quot;:[{&quot;img_url&quot;:&quot;https://pbs.substack.com/media/HF1ldBSaIAEzcZk.png&quot;,&quot;link_url&quot;:&quot;https://t.co/pOyole5BFQ&quot;}],&quot;quoted_tweet&quot;:{},&quot;reply_count&quot;:14,&quot;retweet_count&quot;:42,&quot;like_count&quot;:343,&quot;impression_count&quot;:235555,&quot;expanded_url&quot;:null,&quot;video_url&quot;:null,&quot;belowTheFold&quot;:true}" data-component-name="Twitter2ToDOM"></div><p>The situation is dire enough that <strong>the winner in CPUs is anyone who will execute on the manufacturing side</strong> and provide sufficient CPU supply. Things like core count vs IPC, or reasoning vs action workloads, or what CPU is best suited for an end application are all moot now. Sadly, most of the supply chain squeeze for CPUs also depends on what TSMC can deliver, and their ~$60B 2026 capex spend has been argued to still be too conservative for the number of chips we need. The ace in the hole would be if Intel can deliver server CPUs at scale purely on 18A. 
</p><h4>Nuvacore: Engineered for Altitude</h4><p>Speaking of CPUs, Nuvacore is a new CPU company funded by Sequoia Capital, with the promise below:</p><blockquote><p>A general-purpose CPU core designed to excel everywhere&#8212;from core infrastructure to advanced AI systems, including the continuous demands of agentic computing.<br><br><strong>This is not iteration, it is a complete rewrite of the rules of silicon.</strong></p></blockquote><p>The brains behind this operation are Gerard Williams, John Bruno, and Ram Srinivasan, whom you may know from Nuvia, a company Qualcomm bought in 2021, whose IP it used for its Oryon series of desktop ARM processors. The ARM licensing agreement used in this product became a major point of contention, and was <a href="https://investor.qualcomm.com/news-events/press-releases/news-details/2025/Qualcomm-Achieves-Complete-Victory-Over-Arm-in-Litigation-Challenging-Licensing-Agreements/default.aspx">hotly contested</a> in court. All three founders left Qualcomm earlier this year, and Nuvacore was born.</p><p>Here is what is exciting about this news: Nuvacore promises to rethink how CPUs are built for everything, including the agentic AI era. While an earlier post on this newsletter discussed the &#8220;most desirable&#8221; features of an agentic CPU across 9 different metrics, it will be interesting to see what these CPU legends think is important for AI. More cores? More performance per core? Special architectural decisions? We have to wait and see.</p><p>(via <a href="https://www.nuvacore.ai/">nuvacore.ai</a>)</p><h4>NAND Shortage is Real; Phison fires warning shot</h4><p>Digitimes reports that Phison CEO Pua Khein-Seng warned that the NAND flash shortage is going to <em>significantly</em> worsen towards the end of 2026. It has led the company to break its 26-year-old &#8220;no-debt&#8221; tradition by fundraising a total of $1.4B for 2026. 
Supply tightness is expected to extend well into 2028, possibly beyond.</p><p>Note that Kioxia recently <a href="https://www.tomshardware.com/pc-components/ssds/kioxia-discontinues-2d-nand-products-last-shipments-to-be-made-in-2028-1980s-planar-nand-memory-reaches-end-of-life">ended the production</a> of older SLC and MLC NAND to prioritize higher margin TLC and QLC NAND SSDs for AI applications. I&#8217;ve had multiple discussions with professional investors this week about NAND, and I&#8217;ll be digging deeper into the demand for NAND in a future newsletter article. If you need a refresher, check out an older post.</p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;8eb5cb5b-28c1-4754-a597-e0d2f70b8af3&quot;,&quot;caption&quot;:&quot;Welcome to a &#128274; subscriber-only deep-dive edition &#128274; of my weekly newsletter. Each week, I help investors, professionals and students stay up-to-date on complex topics, and navigate the semiconductor industry.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;Role of Storage in AI, Primer on NAND Flash, and Deep-Dive into QLC SSDs&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:124411709,&quot;name&quot;:&quot;Vikram Sekar&quot;,&quot;bio&quot;:&quot;EE PhD. 15+ years in semiconductor engineering. 
I've helped build the technology I write about.&quot;,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!RTM-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5e3e259-005c-4bcf-bffd-1a20cd78aa86_1080x1080.png&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:100}],&quot;post_date&quot;:&quot;2025-10-19T18:29:19.395Z&quot;,&quot;cover_image&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/77b0e7d9-0b17-4be3-a282-9c92876c64a2_1456x1048.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.viksnewsletter.com/p/role-of-storage-in-ai-primer-on-nand&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:176483956,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:50,&quot;comment_count&quot;:2,&quot;publication_id&quot;:2065897,&quot;publication_name&quot;:&quot;Vik's Newsletter&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!9JlA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa409d69d-ca10-4bfe-a1fc-f8d291690566_185x185.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><p>(via <a href="https://www.digitimes.com/news/a20260413PD227.html">digitimes</a>)</p><h4>Largan won&#8217;t make camera lenses for Apple, becomes CPO pilled</h4><p>I like it when strange things happen &#8212; things outside the norm. 
In what sounds like the little guy finally standing up to the big bully, Largan Precision has said that it won&#8217;t increase order volume for the Apple iPhone 18 Pro, instead choosing to focus on optics for CPO.</p><p>Largan is a major provider of optics to Apple, but this move indicates an aggressive business pivot from a high-volume, low-margin smartphone business to the high-margin, potentially hyper-growth AI optics sector.</p><p>It&#8217;s a high-risk gamble, especially since Apple is buying up all the DRAM available at elevated prices to lock others out of the Android market. This shows that Apple is pretty confident that memory prices are not going to affect its smartphone sales. Yet, Largan is willing to walk away, leaving the pieces to be picked up by competitors such as Sunny Optical (2382.HK) and Genius Electronic Optical (GSEO/TPE:3406). Incidentally, GSEO got a nice stock price bump right after news of the rebuke broke. It is not entirely clear whether these companies can fill Largan&#8217;s shoes for iPhone orders.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!fmOf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f03b01b-d276-4a5e-8aca-4d6405c5aa71_651x354.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!fmOf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f03b01b-d276-4a5e-8aca-4d6405c5aa71_651x354.png 424w, https://substackcdn.com/image/fetch/$s_!fmOf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f03b01b-d276-4a5e-8aca-4d6405c5aa71_651x354.png 848w, 
https://substackcdn.com/image/fetch/$s_!fmOf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f03b01b-d276-4a5e-8aca-4d6405c5aa71_651x354.png 1272w, https://substackcdn.com/image/fetch/$s_!fmOf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f03b01b-d276-4a5e-8aca-4d6405c5aa71_651x354.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!fmOf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f03b01b-d276-4a5e-8aca-4d6405c5aa71_651x354.png" width="651" height="354" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6f03b01b-d276-4a5e-8aca-4d6405c5aa71_651x354.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:354,&quot;width&quot;:651,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:34082,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.viksnewsletter.com/i/194365241?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f03b01b-d276-4a5e-8aca-4d6405c5aa71_651x354.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!fmOf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f03b01b-d276-4a5e-8aca-4d6405c5aa71_651x354.png 424w, https://substackcdn.com/image/fetch/$s_!fmOf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f03b01b-d276-4a5e-8aca-4d6405c5aa71_651x354.png 
848w, https://substackcdn.com/image/fetch/$s_!fmOf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f03b01b-d276-4a5e-8aca-4d6405c5aa71_651x354.png 1272w, https://substackcdn.com/image/fetch/$s_!fmOf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f03b01b-d276-4a5e-8aca-4d6405c5aa71_651x354.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><p>(via <a 
href="https://wccftech.com/apples-incessant-demands-allegedly-rebuffed-by-a-key-iphone-18-supplier/amp/">wccftech</a>)</p><div><hr></div><div class="captioned-button-wrap" data-attrs="{&quot;url&quot;:&quot;https://www.viksnewsletter.com/p/twic-credo-eats-dust-cpu-and-nand?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="CaptionedButtonToDOM"><div class="preamble"><p class="cta-caption"><em>This post is public so feel free to share it.</em></p></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.viksnewsletter.com/p/twic-credo-eats-dust-cpu-and-nand?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.viksnewsletter.com/p/twic-credo-eats-dust-cpu-and-nand?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p></div>]]></content:encoded></item><item><title><![CDATA[Tokenmaxxing and the Token Value Chain]]></title><description><![CDATA[Who profits from tokenmaxxing and who gets stuck with the bill.]]></description><link>https://www.viksnewsletter.com/p/tokenmaxxing-and-the-token-value-chain</link><guid isPermaLink="false">https://www.viksnewsletter.com/p/tokenmaxxing-and-the-token-value-chain</guid><dc:creator><![CDATA[Vikram Sekar]]></dc:creator><pubDate>Tue, 14 Apr 2026 13:09:59 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/a2669219-cde0-4ed2-b8c0-d1c96aba9a7a_1066x591.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This piece is different from the usual hardware deep-dives I write, but I wanted to express some thoughts on <a href="https://www.forbes.com/sites/richardnieva/2026/03/31/the-ai-gods-spending-as-much-as-they-can-on-ai-tokens/">Tokenmaxxing - Silicon Valley&#8217;s newest trend</a>, because token usage is fast 
becoming a proxy for productivity, much like lines-of-code in the dot-com coding era.</p><p>Meta&#8217;s now-defunct &#8220;Claudeonomics&#8221; leaderboard ranked employees by token usage and celebrated power users with titles like &#8220;Token Legend.&#8221; In a 30-day window, company-wide consumption topped 60 trillion tokens, with the top contender alone burning 281 billion. CTO Andrew Bosworth bragged publicly that his best engineer was spending the equivalent of his salary in tokens and getting up to 10x more productivity out of it. How productivity was measured remains unknown. On the All-In podcast at GTC, Jensen said that if his $500K engineer was not consuming at least $250K worth of tokens, he would be &#8220;deeply alarmed.&#8221; Silicon Valley engineers tell me their companies are pushing the same tokenmaxxing message internally. But if the number of tokens used becomes the target, it stops being a useful metric (<a href="https://en.wikipedia.org/wiki/Goodhart%27s_law">Goodhart&#8217;s law</a>).</p><p>Here&#8217;s what you have to pay attention to: the tokenmaxxing message applies to those who can establish a linear (preferably exponential) relationship between outcome and token usage. Tokenmaxxing can be rational, but not always.</p><p>Foundation model providers like Anthropic and OpenAI have a clean, understandable revenue metric (pay-per-token). Meta has massive internal workloads like serving ads that require internal token consumption. <a href="https://x.com/garrytan/status/2017387579377861010?s=20">Garry Tan</a> wants software startups built faster at YC. Nvidia and AMD simply want to sell more hardware.</p><p>In this post, we will examine the promise of the Token Value Chain, how reality is more nuanced, who stands to benefit, and who stands to lose. 
More importantly, we will discuss what a &#8220;smart token&#8221; strategy should look like for companies looking to go AI-native, and when tokenmaxxing is a viable strategy.</p><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.viksnewsletter.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><em>Vik's Newsletter is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</em></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>If you are not a paid subscriber, you can purchase just this article using the button below. 
You can find the whole catalog of articles for purchase <a href="https://www.semiexponent.com/products">at this link</a>.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.semiexponent.com/tokenmaxxing-and-the-token-value-chain&quot;,&quot;text&quot;:&quot;Buy the ebook&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.semiexponent.com/tokenmaxxing-and-the-token-value-chain"><span>Buy the ebook</span></a></p><div><hr></div><h3>The Token Value Chain</h3><p>The token value chain is simple: massive amounts of wealth injected into the top of the chip manufacturing supply chain will flow through token providers, and ultimately manifest as profit for enterprises adopting AI through dramatic increases in productivity (products shipped, revenue earned, etc.).</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Loiu!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6be73dcb-3ea8-497c-ae44-0baf6614fcf4_1076x773.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Loiu!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6be73dcb-3ea8-497c-ae44-0baf6614fcf4_1076x773.png 424w, https://substackcdn.com/image/fetch/$s_!Loiu!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6be73dcb-3ea8-497c-ae44-0baf6614fcf4_1076x773.png 848w, https://substackcdn.com/image/fetch/$s_!Loiu!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6be73dcb-3ea8-497c-ae44-0baf6614fcf4_1076x773.png 1272w, 
https://substackcdn.com/image/fetch/$s_!Loiu!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6be73dcb-3ea8-497c-ae44-0baf6614fcf4_1076x773.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Loiu!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6be73dcb-3ea8-497c-ae44-0baf6614fcf4_1076x773.png" width="529" height="380.0343866171004" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6be73dcb-3ea8-497c-ae44-0baf6614fcf4_1076x773.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:773,&quot;width&quot;:1076,&quot;resizeWidth&quot;:529,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Loiu!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6be73dcb-3ea8-497c-ae44-0baf6614fcf4_1076x773.png 424w, https://substackcdn.com/image/fetch/$s_!Loiu!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6be73dcb-3ea8-497c-ae44-0baf6614fcf4_1076x773.png 848w, https://substackcdn.com/image/fetch/$s_!Loiu!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6be73dcb-3ea8-497c-ae44-0baf6614fcf4_1076x773.png 1272w, 
https://substackcdn.com/image/fetch/$s_!Loiu!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6be73dcb-3ea8-497c-ae44-0baf6614fcf4_1076x773.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Source: ViksNewsletter</figcaption></figure></div><p>The Token Value Chain has three levels:</p><ol><li><p><strong>Token Supplier</strong>: Nvidia and AMD sit at the top of this tier selling GPUs, but the real supplier base is much wider: CPUs, memory/storage, interconnects (copper/optics), PCBs, and power/cooling. 
Neoclouds like CoreWeave and Nebius belong here too, renting out GPU capacity by the hour. Every single token burned is a pick-and-shovel sale for this tier. Revenue scales with volume, though supply-chain and power bottlenecks limit how fast it can grow.</p></li><li><p><strong>Token Provider</strong>: Anthropic and OpenAI do most of the work, with Google, xAI and a handful of others in the mix. These are the companies that take raw compute from the Supplier tier and turn it into tokens sold on a pay-per-use basis. Every token they produce is money, which is why Anthropic&#8217;s revenue run rate has gone from $1 billion to $30 billion in about 15 months. Revenue here also scales with volume. The more their customers tokenmaxx, the better the numbers look.</p></li><li><p><strong>Token Consumer</strong>: Enterprises, startups, and increasingly every Fortune 500 company that has been told to go AI-native. It is also the only tier in the value chain whose revenue is not tied to token volume &#8211; at least not directly, and not always.</p></li></ol><p>The promise of AI is that massive investment into the Supplier tier translates into more tokens generated by Providers, which ultimately leads to massive profitability at the Consumer level, which then recursively funds the next round of infrastructure spend at the Supplier tier. Look closer, though: Provider revenue is tied to token volume, which translates to gigawatts deployed at the Supplier tier. But it is the Consumer who ultimately pays for every token.</p><p>Gavin Baker puts this eloquently in <a href="https://x.com/NVIDIADC/status/2043751071600759099?s=20">an interview at Nvidia GTC</a>.</p><blockquote><p>If you are not a token producer, your fate as a business is going to be entirely determined by how efficiently and effectively you consume tokens as an organization.</p></blockquote><p>In other words, while the cost per token used is pre-determined, the revenue generated per token used is anybody&#8217;s guess. 
While both the Supplier and Provider win on volume, the Consumer has to win on outcome. If you view tokens as the raw materials required to produce a product, then there are two approaches to run the business:</p><ol><li><p><strong>Use as little raw material as possible to produce products, and increase margins</strong>. A token-constrained consumer would use an intentional token deployment strategy and ride the efficient frontier of intelligence per token used. A number of &#8220;smart token strategies&#8221; can be deployed to get the &#8220;most bang for token buck.&#8221;</p></li><li><p><strong>Use as much raw material as possible to generate as many products as possible and corner the market</strong>. A token-rich consumer would deploy an infinite number of tokens of a leading intelligence model without regard to token cost, to produce remarkable results; just like <a href="https://en.wikipedia.org/wiki/Infinite_monkey_theorem">infinite monkeys with a typewriter can produce Shakespeare</a>. In theory, infinite intelligence (and money) would automatically lock out competitors, leaving token-constrained entities unable to compete. This is tokenmaxxing as a strategy.</p></li></ol><p>We&#8217;ll expand on these two ideas after the paywall.</p>
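<p><em>To make the asymmetry concrete, here is a minimal sketch (my own illustration; the function name <code>consumer_roi</code> and every price, token count, and per-token value below are made-up assumptions, not figures from this post) of why Suppliers and Providers win on volume while the Consumer only wins when value per token exceeds its price:</em></p>

```python
# Toy model of the Token Value Chain described above.
# All numbers are hypothetical illustrations, not sourced figures.

def consumer_roi(tokens: float, price_per_mtok: float, value_per_mtok: float) -> float:
    """Net profit for a token Consumer.

    The Provider's revenue (tokens * price) grows with volume no matter
    what; the Consumer only profits if value_per_mtok > price_per_mtok.
    """
    cost = tokens / 1e6 * price_per_mtok      # what the Provider collects
    value = tokens / 1e6 * value_per_mtok     # what the Consumer earns back
    return value - cost

# Strategy 1: token-constrained -- fewer tokens, high value per token.
lean = consumer_roi(tokens=1e9, price_per_mtok=15.0, value_per_mtok=60.0)

# Strategy 2: tokenmaxxing -- huge volume, thin value per token.
maxx = consumer_roi(tokens=1e12, price_per_mtok=15.0, value_per_mtok=16.0)

print(f"lean strategy profit: ${lean:,.0f}")   # $45,000
print(f"tokenmaxx profit:     ${maxx:,.0f}")   # $1,000,000
```

<p><em>Note that under these made-up numbers tokenmaxxing "wins" in absolute profit but at 1000x the token bill, and it flips to a loss the moment value per token dips below price, which is exactly the outcome risk the Consumer tier carries.</em></p>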
      <p>
          <a href="https://www.viksnewsletter.com/p/tokenmaxxing-and-the-token-value-chain">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[🍪 TWiC: Intel with TeraFab+Sambanova+Google, Samsung and Anthropic 🚀, AGI CPU in China]]></title><description><![CDATA[This week in chips: Is Intel back?]]></description><link>https://www.viksnewsletter.com/p/twic-intel-with-terafabsambanovagoogle</link><guid isPermaLink="false">https://www.viksnewsletter.com/p/twic-intel-with-terafabsambanovagoogle</guid><dc:creator><![CDATA[Vikram Sekar]]></dc:creator><pubDate>Fri, 10 Apr 2026 09:39:08 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!oC6_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1980b265-62e8-4ad6-90e0-6498ae4a4d5b_2458x1082.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Intel&#8217;s market cap crosses $300B, up 3x in a mere 6-9 months. As TSMC CoWoS remains constrained, Intel&#8217;s EMIB is picking up steam through 2026, and positive Intel vibes around news cycles have led us to ask: &#8220;Is Intel back?&#8221; In other news, Anthropic annualized revenue rates are through the roof, Samsung profits are insane, and Arm finds a China loophole.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!oC6_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1980b265-62e8-4ad6-90e0-6498ae4a4d5b_2458x1082.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!oC6_!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1980b265-62e8-4ad6-90e0-6498ae4a4d5b_2458x1082.png 424w, 
https://substackcdn.com/image/fetch/$s_!oC6_!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1980b265-62e8-4ad6-90e0-6498ae4a4d5b_2458x1082.png 848w, https://substackcdn.com/image/fetch/$s_!oC6_!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1980b265-62e8-4ad6-90e0-6498ae4a4d5b_2458x1082.png 1272w, https://substackcdn.com/image/fetch/$s_!oC6_!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1980b265-62e8-4ad6-90e0-6498ae4a4d5b_2458x1082.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!oC6_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1980b265-62e8-4ad6-90e0-6498ae4a4d5b_2458x1082.png" width="1456" height="641" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1980b265-62e8-4ad6-90e0-6498ae4a4d5b_2458x1082.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:641,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Intel&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Intel" title="Intel" srcset="https://substackcdn.com/image/fetch/$s_!oC6_!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1980b265-62e8-4ad6-90e0-6498ae4a4d5b_2458x1082.png 424w, 
https://substackcdn.com/image/fetch/$s_!oC6_!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1980b265-62e8-4ad6-90e0-6498ae4a4d5b_2458x1082.png 848w, https://substackcdn.com/image/fetch/$s_!oC6_!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1980b265-62e8-4ad6-90e0-6498ae4a4d5b_2458x1082.png 1272w, https://substackcdn.com/image/fetch/$s_!oC6_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1980b265-62e8-4ad6-90e0-6498ae4a4d5b_2458x1082.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" 
y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Source: Intel</figcaption></figure></div><p>We&#8217;ve covered a lot this week; in case you haven&#8217;t had a chance to catch up:</p><ul><li><p><a href="https://www.viksnewsletter.com/p/microleds-vs-lasers-the-linewidth-tradeoff?r=222kot">MicroLEDs vs Lasers: The Linewidth Tradeoff</a> &#8212; why linewidth matters and how LEDs and lasers differ (no paywall this week, enjoy!).</p></li><li><p>Busy week on the Semi Doped podcast (which crossed 10K downloads!):</p><ul><li><p><a href="https://youtu.be/oWQG207QPvk?si=PwR-dTGUyfGjJwXH">NVIDIA&#8217;s Marvell Strategy, Is Memory Different This Time?, Intel&#8217;s Ireland Fab</a></p></li><li><p><a href="https://youtu.be/C5SZWb16SFY?si=byr_STtd5QOij9SQ">$300M for 70K Viewers | Intel x Elon, OpenAI x TBPN, Citrini&#8217;s Strait of Hormuz Stunt</a></p></li><li><p><a href="https://youtu.be/7Ph9i1KYHxY?si=wGc2vNO7CCF5bMVm">Austin interviews Reiner Pope from MatX</a></p></li></ul></li></ul><p>Alright, let&#8217;s dive into the news.</p><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.viksnewsletter.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><em>News roundups are free, but deep-dives are where the real insights live. 
Upgrade to a paid sub and access 100+ from the archives!</em></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><div><hr></div><h4>Intel Joins Musk&#8217;s TeraFab Project</h4><p>Intel announced it will join Elon Musk's TeraFab project alongside SpaceX, xAI, and Tesla. The Terafab project aims to produce 1TW of compute capacity annually and Intel would bring its expertise in advanced nodes, packaging, and silicon manufacturing at scale.</p><div class="twitter-embed" data-attrs="{&quot;url&quot;:&quot;https://x.com/intel/status/2041501301318766866?s=20&quot;,&quot;full_text&quot;:&quot;Intel is proud to join the Terafab project with <span class=\&quot;tweet-fake-link\&quot;>@SpaceX</span>, <span class=\&quot;tweet-fake-link\&quot;>@xai</span>, and <span class=\&quot;tweet-fake-link\&quot;>@Tesla</span> to help refactor silicon fab technology.\n\nOur ability to design, fabricate, and package ultra-high-performance chips at scale will help accelerate Terafab&#8217;s aim to produce 1 TW/year of compute to power &quot;,&quot;username&quot;:&quot;intel&quot;,&quot;name&quot;:&quot;Intel&quot;,&quot;profile_image_url&quot;:&quot;https://pbs.substack.com/profile_images/1301199561713541120/7dPeX1gK_normal.png&quot;,&quot;date&quot;:&quot;2026-04-07T13:00:14.000Z&quot;,&quot;photos&quot;:[{&quot;img_url&quot;:&quot;https://pbs.substack.com/media/HFTb6BbbcAANe5y.jpg&quot;,&quot;link_url&quot;:&quot;https://t.co/2vUmXn0YhH&quot;}],&quot;quoted_tweet&quot;:{},&quot;reply_count&quot;:1154,&quot;retweet_count&quot;:2359,&quot;like_count&quot;:17047,&quot;impression_count&quot;:23554301,&quot;expanded_url&quot;:null,&quot;video_url&quot;:null,&quot;belowTheFold&quot;:true}" 
data-component-name="Twitter2ToDOM"></div><p>If Musk actually delivers on even 10% of TeraFab's ambition, Intel secures a whale customer, a strategic win, and a major turnaround. For the Terafab project, Intel&#8217;s involvement brings a level of practical feasibility that Tesla/SpaceX/xAI would lack trying to build a leading-edge fab from scratch.</p><p>This is a win-win for both parties, but it will have material impact only when we see significant monthly wafer starts from Intel for the Terafab project.</p><h4>Heterogeneous systems for Agentic AI: Intel + Sambanova</h4><p>In other Intel news, from the press release:</p><blockquote><p>SambaNova today announced the next phase of its collaboration with Intel: a heterogeneous hardware solution that combines GPUs for prefill, Intel&#174; Xeon&#174; 6 processors as both host and &#8220;action&#8221; CPUs, and SambaNova RDUs for decode to deliver premium inference for the most demanding Agentic AI applications.</p></blockquote><p>Just like we saw Nvidia pair Rubin GPUs with Groq LPUs and Vera CPUs, Intel and Sambanova are collaborating to provide specialized decode hardware and CPUs. 
Their comparison table below is interesting and points out something important: <strong>the number of decode chips required</strong>.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!7i4c!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43dc8085-5ce8-4cef-8076-adf77f46950e_960x544.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!7i4c!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43dc8085-5ce8-4cef-8076-adf77f46950e_960x544.jpeg 424w, https://substackcdn.com/image/fetch/$s_!7i4c!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43dc8085-5ce8-4cef-8076-adf77f46950e_960x544.jpeg 848w, https://substackcdn.com/image/fetch/$s_!7i4c!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43dc8085-5ce8-4cef-8076-adf77f46950e_960x544.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!7i4c!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43dc8085-5ce8-4cef-8076-adf77f46950e_960x544.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!7i4c!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43dc8085-5ce8-4cef-8076-adf77f46950e_960x544.jpeg" width="960" height="544" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/43dc8085-5ce8-4cef-8076-adf77f46950e_960x544.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:544,&quot;width&quot;:960,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;intel-sn50&quot;,&quot;title&quot;:&quot;intel-sn50&quot;,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="intel-sn50" title="intel-sn50" srcset="https://substackcdn.com/image/fetch/$s_!7i4c!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43dc8085-5ce8-4cef-8076-adf77f46950e_960x544.jpeg 424w, https://substackcdn.com/image/fetch/$s_!7i4c!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43dc8085-5ce8-4cef-8076-adf77f46950e_960x544.jpeg 848w, https://substackcdn.com/image/fetch/$s_!7i4c!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43dc8085-5ce8-4cef-8076-adf77f46950e_960x544.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!7i4c!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43dc8085-5ce8-4cef-8076-adf77f46950e_960x544.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" 
stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Source: Sambanova</figcaption></figure></div><p>One of the major limitations of LPUs is their limited SRAM and the resulting need to deploy thousands of chips for large models. The complex compiler architecture of Groq chips means it&#8217;s not easy to add HBM/DRAM to expand capacity at the expense of speed. This inelasticity is why I have believed the Groq LPU was never a good long-term fit, regardless of how much Nvidia paid for them.</p><p>Sambanova&#8217;s SN50 RDU architecture uses SRAM, HBM and DRAM, but their secret sauce is to &#8220;predetermine&#8221; where each piece of information will come from. RDU stands for Reconfigurable Dataflow Unit: it establishes a data path between the different heterogeneous storage media to access data. This makes it vastly more flexible than Groq&#8217;s static approach. 
MatX also uses a hybrid SRAM+HBM system that is super interesting (check out Austin&#8217;s interview with the CEO linked above). We discussed all this in a previous TWiC edition.</p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;a4084f28-e6ad-4899-9360-21b39b1bdcd9&quot;,&quot;caption&quot;:&quot;In a way, TGIF because I&#8217;ve had a lot of changes to deal with this week. I&#8217;ll post an update about it at some point.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;&#127850; TWiC: AMD+Meta, New AI Chips, Citrini&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:124411709,&quot;name&quot;:&quot;Vikram Sekar&quot;,&quot;bio&quot;:&quot;EE PhD. 15+ years in semiconductor engineering. I've helped build the technology I write about.&quot;,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!RTM-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5e3e259-005c-4bcf-bffd-1a20cd78aa86_1080x1080.png&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:100}],&quot;post_date&quot;:&quot;2026-02-27T14:51:05.918Z&quot;,&quot;cover_image&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/94feaa1e-cd75-4775-83e2-ec12d3ab31e9_685x249.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.viksnewsletter.com/p/twic-amdmeta-new-ai-chips-citrini&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:188973578,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:29,&quot;comment_count&quot;:5,&quot;publication_id&quot;:2065897,&quot;publication_name&quot;:&quot;Vik's 
Newsletter&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!9JlA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa409d69d-ca10-4bfe-a1fc-f8d291690566_185x185.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><p>This is just the first of many such heterogeneous solutions that will emerge. The effectiveness of each solution will depend on how data movement is architected between the various pieces of disparate hardware. Exciting times.</p><p>(via <a href="https://sambanova.ai/press/sambanova-announces-collaboration-with-intel-on-ai-solution">Sambanova</a>)</p><h4>Intel + Google partner on Xeon CPUs and IPUs</h4><p>In even more Intel news, Google and Intel are partnering to deploy Xeon CPUs and Infrastructure Processing Units (IPUs) for AI workloads. IPUs handle networking, storage, and security, much like BlueField Data Processing Units (DPUs) from Nvidia. The main idea is to offload some administrative functions from the main CPU so that it can focus on more critical AI-related orchestration tasks.</p><p>In a way, this shows that not all CPUs are headed the ARM way, and x86 retains its stronghold in AI datacenter deployments. Google&#8217;s own ARM-based Axion CPUs were deployed for cloud infra and were focused on energy efficiency. In the AI era, raw power still matters and x86 Xeons still crush. If you want a head-to-head comparison of the CPUs in the market, check out our CPU yellow pages.</p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;29084fdf-6ada-4cc6-88c5-b8032694975c&quot;,&quot;caption&quot;:&quot;In last week&#8217;s article, we discussed why CPUs are critical in the era of agentic AI. The &#8220;operational burden&#8221; of orchestrating AI tasks falls on the CPU, which affects GPU utilization rates and eventually TCO. 
Check out the whole post.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;The AI Datacenter CPU Yellow Pages&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:124411709,&quot;name&quot;:&quot;Vikram Sekar&quot;,&quot;bio&quot;:&quot;EE PhD. 15+ years in semiconductor engineering. I've helped build the technology I write about.&quot;,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!RTM-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5e3e259-005c-4bcf-bffd-1a20cd78aa86_1080x1080.png&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:100}],&quot;post_date&quot;:&quot;2026-02-24T13:21:40.703Z&quot;,&quot;cover_image&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/754d30f1-1b70-4131-b083-869177b41afb_1456x1048.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.viksnewsletter.com/p/the-ai-datacenter-cpu-yellow-pages&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:188873398,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:39,&quot;comment_count&quot;:0,&quot;publication_id&quot;:2065897,&quot;publication_name&quot;:&quot;Vik's Newsletter&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!9JlA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa409d69d-ca10-4bfe-a1fc-f8d291690566_185x185.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><p>(via <a href="https://newsroom.intel.com/data-center/intel-google-deepen-collaboration-to-advance-ai-infrastructure">Intel</a>)</p><h4>Samsung Projects Record Q1 Profit, Driven by Memory</h4><p>Samsung is projecting a blowout quarter with a 
755% YoY increase in profit to 57T won ($38B), compared to 6.7T won ($5B) in Q1 2025. About 95% of the profit comes from the memory business, while other segments like mobile are flat or declining. Revenue is up 68% YoY. Other major players like SK Hynix and Micron are expected to post spectacular results in the coming quarter too.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!hUEb!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a3b0d2f-59a1-4d5e-891e-b5506502fec6_1420x1032.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!hUEb!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a3b0d2f-59a1-4d5e-891e-b5506502fec6_1420x1032.png 424w, https://substackcdn.com/image/fetch/$s_!hUEb!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a3b0d2f-59a1-4d5e-891e-b5506502fec6_1420x1032.png 848w, https://substackcdn.com/image/fetch/$s_!hUEb!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a3b0d2f-59a1-4d5e-891e-b5506502fec6_1420x1032.png 1272w, https://substackcdn.com/image/fetch/$s_!hUEb!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a3b0d2f-59a1-4d5e-891e-b5506502fec6_1420x1032.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!hUEb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a3b0d2f-59a1-4d5e-891e-b5506502fec6_1420x1032.png" width="1420" height="1032" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0a3b0d2f-59a1-4d5e-891e-b5506502fec6_1420x1032.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1032,&quot;width&quot;:1420,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Samsung's  Q1 profit seen exceeding its entire 2025 annual profit as it benefits from AI boom.&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Samsung's  Q1 profit seen exceeding its entire 2025 annual profit as it benefits from AI boom." title="Samsung's  Q1 profit seen exceeding its entire 2025 annual profit as it benefits from AI boom." srcset="https://substackcdn.com/image/fetch/$s_!hUEb!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a3b0d2f-59a1-4d5e-891e-b5506502fec6_1420x1032.png 424w, https://substackcdn.com/image/fetch/$s_!hUEb!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a3b0d2f-59a1-4d5e-891e-b5506502fec6_1420x1032.png 848w, https://substackcdn.com/image/fetch/$s_!hUEb!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a3b0d2f-59a1-4d5e-891e-b5506502fec6_1420x1032.png 1272w, https://substackcdn.com/image/fetch/$s_!hUEb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a3b0d2f-59a1-4d5e-891e-b5506502fec6_1420x1032.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" 
type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Source: Reuters</figcaption></figure></div><p>We are now deep in the memory supercycle that we first wrote about on this newsletter six months ago.</p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;e2fd1fd8-20db-407f-af31-f9080f5ffe98&quot;,&quot;caption&quot;:&quot;Today&#8217;s post is a discussion on a recent trend in semiconductor memory. If you&#8217;re new, start here! 
On Sundays, I write deep-dive posts on critical semiconductor technology for the AI-age in an accessible manner for paid subscribers.Upgrade to a paid subscription for all the&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;How AI Demand Is Driving a Multi-Year Memory Supercycle&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:124411709,&quot;name&quot;:&quot;Vikram Sekar&quot;,&quot;bio&quot;:&quot;EE PhD. 15+ years in semiconductor engineering. I've helped build the technology I write about.&quot;,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!RTM-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5e3e259-005c-4bcf-bffd-1a20cd78aa86_1080x1080.png&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:100}],&quot;post_date&quot;:&quot;2025-10-01T18:29:42.329Z&quot;,&quot;cover_image&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/15bda9d5-3a30-4b1c-8a7e-a1449b374c28_1456x1048.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.viksnewsletter.com/p/how-ai-demand-is-driving-a-memory-supercycle&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:174806500,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:43,&quot;comment_count&quot;:18,&quot;publication_id&quot;:2065897,&quot;publication_name&quot;:&quot;Vik's Newsletter&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!9JlA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa409d69d-ca10-4bfe-a1fc-f8d291690566_185x185.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><p>In <a 
href="https://youtu.be/oWQG207QPvk?si=15EKxErGSJJF-Tcw">this week&#8217;s Semi Doped podcast</a>, we discussed how spot DRAM prices have soared because AI is sucking up so much supply that consumer devices are left scrambling for the leftover silicon. Spikes like this have ended in memory price crashes in the past, but we argued in an article for paid subscribers that this time is different: AI imposes structural demand, regardless of algorithmic improvements such as TurboQuant.</p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;94a22237-747f-469a-9f22-7a39ad0769b2&quot;,&quot;caption&quot;:&quot;Last week on this newsletter, we discussed how TurboQuant works and how it signals another step along the path of algorithmic optimizations that is decoupled from the need for memory in AI systems. In the latest episode of Semi Doped, Austin (of ChipStrat&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;Why the DRAM Structural Floor in the AI Era Makes This Time Different&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:124411709,&quot;name&quot;:&quot;Vikram Sekar&quot;,&quot;bio&quot;:&quot;EE PhD. 15+ years in semiconductor engineering. 
I've helped build the technology I write about.&quot;,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!RTM-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5e3e259-005c-4bcf-bffd-1a20cd78aa86_1080x1080.png&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:100}],&quot;post_date&quot;:&quot;2026-04-01T13:41:01.174Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!NCxX!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3602758f-7e11-46fa-b895-315bac255ba4_717x696.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.viksnewsletter.com/p/evaluating-memory-hog-cycles-in-the-ai-era&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:192845993,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:34,&quot;comment_count&quot;:0,&quot;publication_id&quot;:2065897,&quot;publication_name&quot;:&quot;Vik's Newsletter&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!9JlA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa409d69d-ca10-4bfe-a1fc-f8d291690566_185x185.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;9d36f7ac-ff2f-42a2-b414-a2775ee7b0bc&quot;,&quot;caption&quot;:&quot;Google&#8217;s blog post on a quantization technique called TurboQuant caused a sharp selloff in memory stocks yesterday in the fear that KV-cache usage will drop significantly, and that memory will no longer be a concern. 
The actual method was published nearly a year ago, but Google&#8217;s resurfacing of its prior research is what is causing the jitters.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;TurboQuant: Inner Workings and Implications&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:124411709,&quot;name&quot;:&quot;Vikram Sekar&quot;,&quot;bio&quot;:&quot;EE PhD. 15+ years in semiconductor engineering. I've helped build the technology I write about.&quot;,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!RTM-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5e3e259-005c-4bcf-bffd-1a20cd78aa86_1080x1080.png&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:100}],&quot;post_date&quot;:&quot;2026-03-26T07:43:23.712Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!ToSn!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0c97aef9-a8a2-43a8-9eb3-3b6af92668eb_1250x561.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.viksnewsletter.com/p/turboquant-inner-workings-and-implications&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:192180598,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:37,&quot;comment_count&quot;:2,&quot;publication_id&quot;:2065897,&quot;publication_name&quot;:&quot;Vik's Newsletter&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!9JlA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa409d69d-ca10-4bfe-a1fc-f8d291690566_185x185.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><p>Samsung&#8217;s 
projected Q1 2026 numbers validate growing memory demand and DRAM/NAND price hikes that should keep climbing in the coming quarters. But notably, a large portion of the ~8x profit increase is coming from the commodity DRAM market, and not necessarily HBM. This is worrisome because, while HBM demand provides a structural floor, DRAM spot prices are far more volatile and can turn around quickly.</p><h4>Anthropic Hits $30B Revenue Run Rate, Overtakes OpenAI</h4><p>Anthropic revealed a $30B annualized revenue run rate, triple the ~$9B at end-2025 and up from ~$19B in March. They simultaneously announced a deal for 3.5GW of Google TPUs (via Broadcom) starting 2027, adding to ~1GW already secured.</p><div class="twitter-embed" data-attrs="{&quot;url&quot;:&quot;https://x.com/albrgr/status/2041288324464451617?s=20&quot;,&quot;full_text&quot;:&quot;&quot;,&quot;username&quot;:&quot;albrgr&quot;,&quot;name&quot;:&quot;Alexander Berger&quot;,&quot;profile_image_url&quot;:&quot;https://pbs.substack.com/profile_images/1104472754697338880/PAwSOTHf_normal.png&quot;,&quot;date&quot;:&quot;2026-04-06T22:53:56.000Z&quot;,&quot;photos&quot;:[{&quot;img_url&quot;:&quot;https://pbs.substack.com/media/HFQc5X6bQAAT3K1.jpg&quot;,&quot;link_url&quot;:&quot;https://t.co/FhVUSNgK5h&quot;}],&quot;quoted_tweet&quot;:{&quot;full_text&quot;:&quot;Our run-rate revenue has surpassed $30 billion, up from $9 billion at the end of 2025, as demand for Claude continues to accelerate. 
This partnership gives us the compute to keep pace.\n\nRead more: https://t.co/XgSjL0And7&quot;,&quot;username&quot;:&quot;AnthropicAI&quot;,&quot;name&quot;:&quot;Anthropic&quot;,&quot;profile_image_url&quot;:&quot;https://pbs.substack.com/profile_images/1798110641414443008/XP8gyBaY_normal.jpg&quot;},&quot;reply_count&quot;:26,&quot;retweet_count&quot;:79,&quot;like_count&quot;:1266,&quot;impression_count&quot;:316227,&quot;expanded_url&quot;:null,&quot;video_url&quot;:null,&quot;belowTheFold&quot;:true}" data-component-name="Twitter2ToDOM"></div><p><strong>30x ARR growth in 16 months makes Anthropic the fastest-scaling enterprise company in history</strong>. But the infrastructure story is equally significant: they're hedging across AWS Trainium, Google TPUs, and NVIDIA GPUs &#8212; a diversified compute strategy. This is a big plus for Broadcom, which now has a &gt;$100B AI pipeline (possibly $200B?). In a weird turn of events, Anthropic is <a href="https://www.reuters.com/business/anthropic-weighs-building-it-own-ai-chips-sources-say-2026-04-09/">rumored to be developing its own chips</a>. What else are they gonna do with the extra change when their ARR hits $100B next month? &#129313; (or maybe not?)</p><h4>Arm&#8217;s Strategy of Selling AGI CPUs to China</h4><p>This bit of news is slightly over a week old, but I feel it flew under the radar. Arm&#8217;s CEO Rene Haas said in an interview with ChinaDaily that Arm intends to sell its finished AGI CPU to Chinese customers. US/UK export controls restrict Arm from licensing its Neoverse V3 cores, but say nothing about selling finished chips instead of just IP. This is a key distinction: licensing IP would let Chinese firms design and produce their own chips indefinitely; buying finished CPUs from Arm does not transfer that design capability.</p><p>Selling CPUs in China unlocks a huge market that previously did not exist for Arm. 
The AGI CPU gives Chinese buyers immediate access to a modern, high-core-count, AI-optimized Arm server processor for orchestrating large AI/agentic workloads. The key announcement to now track is Arm&#8217;s first Chinese CPU customer, whoever that is.</p><p>(via <a href="https://www.chinadaily.com.cn/a/202603/31/WS69cb5aaea310d6866eb40eae.html">ChinaDaily</a>, <a href="https://www.digitimes.com/news/a20260407PD234/arm-agi-cpu-ip-licensing-china.html">DigiTimes</a>)</p><div><hr></div><p>Have a great weekend!</p><div class="captioned-button-wrap" data-attrs="{&quot;url&quot;:&quot;https://www.viksnewsletter.com/p/twic-intel-with-terafabsambanovagoogle?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="CaptionedButtonToDOM"><div class="preamble"><p class="cta-caption"><em>This post is public so feel free to share it.</em></p></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.viksnewsletter.com/p/twic-intel-with-terafabsambanovagoogle?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.viksnewsletter.com/p/twic-intel-with-terafabsambanovagoogle?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p></div><p></p>]]></content:encoded></item><item><title><![CDATA[MicroLEDs vs. 
Lasers: The Linewidth Tradeoff]]></title><description><![CDATA[Why spectral purity defines the speed limits of datacenter optics]]></description><link>https://www.viksnewsletter.com/p/microleds-vs-lasers-the-linewidth-tradeoff</link><guid isPermaLink="false">https://www.viksnewsletter.com/p/microleds-vs-lasers-the-linewidth-tradeoff</guid><dc:creator><![CDATA[Vikram Sekar]]></dc:creator><pubDate>Tue, 07 Apr 2026 16:40:58 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/a8d51b49-98b0-4fab-8148-53cb04517219_1456x1048.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>There has been considerable chatter about Lumentum&#8217;s lasers and their narrow linewidth, contrasted with questions about the viability of microLEDs in datacenters. I&#8217;ve been meaning to write about microLEDs for quite a while, and we&#8217;ll cover some general aspects here. Since microLEDs are a big topic, this is best handled as a sequence of posts that builds up the whole picture over time. We will make constant comparisons to lasers since they are the incumbent optical technology. Feel free to ask follow-up questions in the comments at any time - it will help guide future posts.</p><p>A useful starting point for understanding the lasers-versus-LEDs debate is to ask what linewidth is, why it matters, and how lasers and LEDs broadly compare. Let&#8217;s first describe the technology landscape.</p><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.viksnewsletter.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><em>This is a free post, but it is representative of the kind of deep dives subscribers get. Sign up for free, or upgrade to paid for more follow-ups on this topic. 
Oh, and expense it too!</em></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>You can download this post in epub or pdf formats for free using the button below. If you are not a paid subscriber and would like to purchase individual articles, please see the catalog <a href="https://www.semiexponent.com/products">here</a>.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.semiexponent.com/microleds-vs-lasers-the-linewidth-tradeoff&quot;,&quot;text&quot;:&quot;Get as free ebook&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.semiexponent.com/microleds-vs-lasers-the-linewidth-tradeoff"><span>Get as free ebook</span></a></p><div><hr></div><h3>Lumentum, Coherent, and DFB Lasers</h3><p>Distributed Feedback (DFB) lasers, often made from Indium Phosphide (InP), are known for their narrow linewidth, and Lumentum holds dominance in 200G/lane <em>electro-absorption</em> modulated lasers (EMLs). Their moat comes from having both a great laser source and a co-optimized modulator that encodes high-speed electrical signals into light. The performance gap in EMLs is seemingly wide enough (specs are not widely published) that their competitor, Coherent, actually buys these EMLs from them. This puts Lumentum in the driver&#8217;s seat for EMLs in the 1.6T optical era of pluggable transceivers.</p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;613d81f5-09fe-4561-bdeb-fcb53f44c4bb&quot;,&quot;caption&quot;:&quot;Lumentum ($LITE) delivered a stellar Q2 earnings report with revenue up 65% YoY. 
We won&#8217;t get into all the numbers here, but you can hear CEO Michael Hurlston brimming with confidence:&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;Lumentum: Laser Demand, OCS, CPO and Optical Scale-Up&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:124411709,&quot;name&quot;:&quot;Vikram Sekar&quot;,&quot;bio&quot;:&quot;EE PhD. 15+ years in semiconductor engineering. I've helped build the technology I write about.&quot;,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!RTM-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5e3e259-005c-4bcf-bffd-1a20cd78aa86_1080x1080.png&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:100}],&quot;post_date&quot;:&quot;2026-02-06T01:59:33.076Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!Yw2B!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7fe588bb-9acc-4bc1-aadc-e245fe2512ef_600x338.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.viksnewsletter.com/p/lumentum-laser-demand-ocs-cpo-optical-scaleup&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:186996586,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:42,&quot;comment_count&quot;:1,&quot;publication_id&quot;:2065897,&quot;publication_name&quot;:&quot;Vik's Newsletter&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!9JlA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa409d69d-ca10-4bfe-a1fc-f8d291690566_185x185.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><p>A DFB laser for co-packaged optics 
(CPO) is a different beast and is a space where Lumentum has serious competition. CPO requires a constant source of light (called continuous-wave or CW) because the modulation is handled by the silicon photonics chip using a ring or Mach-Zehnder modulator. Here, a constant output power of 300-400mW at elevated temperatures of 50-70&#176;C is the key differentiator. This level of output power is needed because the light source is outside the rack, and connected to the SiPho chip via a long polarization maintaining optical fiber. We&#8217;ve discussed this earlier.</p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;17ef7fff-76c0-417b-af1c-b7032cb601cf&quot;,&quot;caption&quot;:&quot;Welcome to a &#128274; subscriber-only deep-dive edition &#128274; of my weekly newsletter. Each week, I help investors, professionals and students stay up-to-date on complex topics, and navigate the semiconductor industry.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;Why Co-Packaged Optics Uses External Lasers Instead of Integrated Sources&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:124411709,&quot;name&quot;:&quot;Vikram Sekar&quot;,&quot;bio&quot;:&quot;EE PhD. 15+ years in semiconductor engineering. 
I've helped build the technology I write about.&quot;,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!RTM-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5e3e259-005c-4bcf-bffd-1a20cd78aa86_1080x1080.png&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:100}],&quot;post_date&quot;:&quot;2025-11-09T18:29:42.042Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!-jgG!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4567942b-3b5f-4746-8241-33b4cafbc8ca_900x600.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.viksnewsletter.com/p/why-cpo-uses-external-lasers&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:178423061,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:36,&quot;comment_count&quot;:3,&quot;publication_id&quot;:2065897,&quot;publication_name&quot;:&quot;Vik's Newsletter&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!9JlA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa409d69d-ca10-4bfe-a1fc-f8d291690566_185x185.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><h3>Laser Linewidth, RIN, and Power</h3><p>Lumentum&#8217;s <a href="https://www.lumentum.com/en/products/data-center/cw-lasers/uhp-lasers-cpo">ELS laser for CPO</a> has an advertised output power of 350 mW at 50&#176;C, a line width of &lt;500 kHz, for a 1311 nm laser. Coherent&#8217;s <a href="https://www.coherent.com/news/press-releases/coherent-samples-low-noise-400mw-cw-lasers">equivalent laser</a> has a reported output power of 400mW at 50&#176;C and a linewidth of &lt;200 kHz. 
Linewidth is a measure of how pure the light source is, and is quantified by the Full-Width at Half-Maximum (FWHM). Take the maximum laser power, cut it by half and measure the spreading of the laser frequency/wavelength. The smaller this number, the more pure the laser output on the spectrum.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!nsFS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff19c37ec-de32-4474-a9a1-03fae62fadc8_361x219.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!nsFS!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff19c37ec-de32-4474-a9a1-03fae62fadc8_361x219.png 424w, https://substackcdn.com/image/fetch/$s_!nsFS!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff19c37ec-de32-4474-a9a1-03fae62fadc8_361x219.png 848w, https://substackcdn.com/image/fetch/$s_!nsFS!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff19c37ec-de32-4474-a9a1-03fae62fadc8_361x219.png 1272w, https://substackcdn.com/image/fetch/$s_!nsFS!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff19c37ec-de32-4474-a9a1-03fae62fadc8_361x219.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!nsFS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff19c37ec-de32-4474-a9a1-03fae62fadc8_361x219.png" width="361" height="219" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f19c37ec-de32-4474-a9a1-03fae62fadc8_361x219.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:219,&quot;width&quot;:361,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!nsFS!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff19c37ec-de32-4474-a9a1-03fae62fadc8_361x219.png 424w, https://substackcdn.com/image/fetch/$s_!nsFS!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff19c37ec-de32-4474-a9a1-03fae62fadc8_361x219.png 848w, https://substackcdn.com/image/fetch/$s_!nsFS!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff19c37ec-de32-4474-a9a1-03fae62fadc8_361x219.png 1272w, https://substackcdn.com/image/fetch/$s_!nsFS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff19c37ec-de32-4474-a9a1-03fae62fadc8_361x219.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a><figcaption class="image-caption">Source: Rami Arieli, &#8220;<a href="https://perg.phys.ksu.edu/vqm/laserweb/ch-5/F5s1p3.htm">The Laser Adventure</a>&#8221;</figcaption></figure></div><p>Anything under 1 MHz in linewidth is exceptionally pure light, really. A 1311 nm laser corresponds to a carrier frequency of about 230 THz in free space, so a sub-MHz linewidth is a vanishingly small fraction of the center frequency. 
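To put numbers on that purity, here is a quick back-of-the-envelope conversion between frequency and wavelength linewidth (a sketch using the vendor specs quoted above; the arithmetic, not vendor code):

```python
# How pure is a sub-MHz linewidth? Convert the DFB spec to fractional purity
# and to an equivalent wavelength spread.
C = 299_792_458.0             # speed of light (m/s)

wavelength = 1311e-9          # DFB laser wavelength from the specs above (m)
linewidth_hz = 500e3          # <500 kHz linewidth spec

center_freq = C / wavelength               # optical carrier, ~229 THz
fractional = linewidth_hz / center_freq    # dimensionless purity

# Equivalent wavelength spread: d_lambda = lambda^2 * d_nu / c
d_lambda = wavelength ** 2 * linewidth_hz / C

print(f"carrier: {center_freq / 1e12:.1f} THz")
print(f"fractional linewidth: {fractional:.1e}")       # about 2e-9
print(f"wavelength spread: {d_lambda * 1e15:.1f} fm")  # femtometers
```

A 500 kHz linewidth works out to roughly two parts per billion of the carrier, i.e., a wavelength spread of only a few femtometers.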
The deviation in wavelength is so small that laser engineers prefer frequency units to represent it. This is the magic of DFB lasers.</p><p>Lasers from both companies have a Relative Intensity Noise (RIN) of about -145 dB/Hz. RIN is a measure of how stable the output power is; think of a laser with high RIN as one that flickers in intensity. Lumentum is not meaningfully better in RIN: the frequently cited -156 dB/Hz RIN for Lumentum is <a href="https://www.lumentum.com/sites/default/files/2025-12/high_power_cw_laser_for_co-packaged_optics_2022.pdf">a hero number from a CLEO 2022 paper</a>. Their production RIN is in the same ballpark as Coherent&#8217;s. Thus, while Lumentum has a clear advantage in EMLs for pluggables, the race to CPO lasers is highly contested (there are other players, but their laser specs are not publicized).</p><p>Whether for pluggables or CPO, the key takeaway is that DFB lasers, with their narrow linewidth, are perfectly suited for optical communication and have been the mainstay of long-haul links. They allow encoding of fast data signals and have long-distance reach. 
Here is the critical question: is DFB laser the right choice of technology for short-to-medium distance optical links in a datacenter?</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ISXN!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ab61de3-a53a-4c2e-8e4f-9555a96e32a8_2048x1150.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ISXN!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ab61de3-a53a-4c2e-8e4f-9555a96e32a8_2048x1150.png 424w, https://substackcdn.com/image/fetch/$s_!ISXN!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ab61de3-a53a-4c2e-8e4f-9555a96e32a8_2048x1150.png 848w, https://substackcdn.com/image/fetch/$s_!ISXN!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ab61de3-a53a-4c2e-8e4f-9555a96e32a8_2048x1150.png 1272w, https://substackcdn.com/image/fetch/$s_!ISXN!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ab61de3-a53a-4c2e-8e4f-9555a96e32a8_2048x1150.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!ISXN!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ab61de3-a53a-4c2e-8e4f-9555a96e32a8_2048x1150.png" width="1456" height="818" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4ab61de3-a53a-4c2e-8e4f-9555a96e32a8_2048x1150.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:818,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!ISXN!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ab61de3-a53a-4c2e-8e4f-9555a96e32a8_2048x1150.png 424w, https://substackcdn.com/image/fetch/$s_!ISXN!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ab61de3-a53a-4c2e-8e4f-9555a96e32a8_2048x1150.png 848w, https://substackcdn.com/image/fetch/$s_!ISXN!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ab61de3-a53a-4c2e-8e4f-9555a96e32a8_2048x1150.png 1272w, https://substackcdn.com/image/fetch/$s_!ISXN!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ab61de3-a53a-4c2e-8e4f-9555a96e32a8_2048x1150.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Avicena thinks DFBs are like using an airplane to get to the grocery store. Source: ISSCC 2025 Forum 1.</figcaption></figure></div><h3>The Case for MicroLEDs: Benefits and Challenges</h3><p>There has been a slew of activity around the use of Micro Light Emitting Diodes (LEDs) for data transmission. <a href="https://investors.credosemi.com/news-events/news/news-details/2025/Credo-to-Acquire-Hyperlume-Inc-/default.aspx">Credo moved to acquire Hyperlume</a> last year, Microsoft published results from <a href="https://www.microsoft.com/en-us/research/publication/mosaic-breaking-the-optics-versus-copper-trade-off-with-a-wide-and-slow-architecture-and-microleds/">Mosaic</a>, and Marvell most recently is <a href="https://investor.marvell.com/news-events/press-releases/detail/1012/marvell-and-mojo-vision-collaborate-to-develop-next-generation-high-density-micro-led-connectivity-solutions">collaborating with Mojo Vision</a> on microLEDs. 
We would be remiss if we did not mention <a href="https://avicena.tech/">Avicena</a>, which has been pioneering microLEDs for AI interconnects for a long time.</p><p>MicroLEDs are just really small versions (10-50 microns in size) of your regular lightbulbs, and are built with Gallium Nitride (GaN) to emit blue light. Just like you don&#8217;t get a laser light show every time you walk in your room and turn the light on, microLEDs do not emit a narrow pencil beam of light like lasers. They instead emit light over a wide angle and need to be focused with a microlens. The light is not spectrally pure like a laser&#8217;s either. Blue GaN microLEDs have a wavelength of about 450nm, and linewidths of 10-15nm. Compared to lasers, this is a terrible linewidth: it is about 3% of the center wavelength.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!wyQA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b3f264f-574e-43aa-9f6a-c8affca3dd3c_731x361.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!wyQA!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b3f264f-574e-43aa-9f6a-c8affca3dd3c_731x361.png 424w, https://substackcdn.com/image/fetch/$s_!wyQA!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b3f264f-574e-43aa-9f6a-c8affca3dd3c_731x361.png 848w, https://substackcdn.com/image/fetch/$s_!wyQA!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b3f264f-574e-43aa-9f6a-c8affca3dd3c_731x361.png 1272w, 
https://substackcdn.com/image/fetch/$s_!wyQA!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b3f264f-574e-43aa-9f6a-c8affca3dd3c_731x361.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!wyQA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b3f264f-574e-43aa-9f6a-c8affca3dd3c_731x361.png" width="731" height="361" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9b3f264f-574e-43aa-9f6a-c8affca3dd3c_731x361.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:361,&quot;width&quot;:731,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!wyQA!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b3f264f-574e-43aa-9f6a-c8affca3dd3c_731x361.png 424w, https://substackcdn.com/image/fetch/$s_!wyQA!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b3f264f-574e-43aa-9f6a-c8affca3dd3c_731x361.png 848w, https://substackcdn.com/image/fetch/$s_!wyQA!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b3f264f-574e-43aa-9f6a-c8affca3dd3c_731x361.png 1272w, 
https://substackcdn.com/image/fetch/$s_!wyQA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b3f264f-574e-43aa-9f6a-c8affca3dd3c_731x361.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Source: Microsoft</figcaption></figure></div><p>The poor linewidth in comparison to lasers imposes restrictions on data rate that a microLED optical link can support. 
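To get a feel for how binding that restriction is, here is a rough dispersion-limited rate estimate using the standard pulse-broadening relation (dT = |D| &#215; L &#215; &#916;&#955;). The dispersion parameter below is an assumed order-of-magnitude value for glass at blue wavelengths, not a figure from this article:

```python
# Rough dispersion-limited bit rate for a wide-linewidth microLED lane.
def pulse_broadening_ps(d_ps_nm_km: float, length_km: float, dl_nm: float) -> float:
    """Chromatic-dispersion pulse spreading: dT = |D| * L * d_lambda."""
    return abs(d_ps_nm_km) * length_km * dl_nm

D = -300.0      # ps/(nm*km): assumed order of magnitude for silica in the blue
L = 0.010       # 10 m link, expressed in km
dl_led = 12.0   # nm, mid-range of the 10-15 nm microLED linewidth

dT = pulse_broadening_ps(D, L, dl_led)  # pulse spreading in ps
max_rate_gbps = 1e3 / (4 * dT)          # rule of thumb: bit period >= 4 * dT

print(f"broadening over 10 m: {dT:.0f} ps")
print(f"dispersion-limited rate: roughly {max_rate_gbps:.0f} Gbps per lane")
```

Even over just 10 m, a 12 nm linewidth caps each lane at single-digit Gbps, which is why the fix is more lanes rather than faster ones.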
The way around this is to use an array of microLEDs in a &#8220;wide-but-slow&#8221; approach &#8211; just like each data lane in high bandwidth memory (HBM) is slower than what is possible in GDDR, but the overall throughput is higher. There are a few additional benefits of using microLEDs:</p><ol><li><p>Lower power than DFB laser optics (caveat to follow), but better reach than copper (possibly &gt;10m at Tbps aggregate speeds)</p></li><li><p>Better reliability than lasers, because LEDs are structurally simpler and relatively temperature insensitive. A wide-but-slow approach also allows redundant lanes for failover.</p></li><li><p>Linear path to scaling to higher speeds: increase per-lane speed (harder) or increase the number of lanes (easier).</p></li></ol><p>While microLED cables would work directly with existing pluggable infrastructure, there are two important requirements that make things more complex:</p><p><strong>Focusing optics</strong></p><p>The broad-angle emission from a microLED needs to be focused with microlenses in order to couple the light into an array of optical fibers carrying multiple parallel lanes of data. In practice this is not difficult, because the microlenses are also implemented as an array that fits directly over the microLED array. 
DFB laser sources have specialized optics to couple light into fibers too.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!i1eQ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff7c8baa3-98d6-4f70-b9c6-d265d53a6a2a_655x384.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!i1eQ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff7c8baa3-98d6-4f70-b9c6-d265d53a6a2a_655x384.png 424w, https://substackcdn.com/image/fetch/$s_!i1eQ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff7c8baa3-98d6-4f70-b9c6-d265d53a6a2a_655x384.png 848w, https://substackcdn.com/image/fetch/$s_!i1eQ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff7c8baa3-98d6-4f70-b9c6-d265d53a6a2a_655x384.png 1272w, https://substackcdn.com/image/fetch/$s_!i1eQ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff7c8baa3-98d6-4f70-b9c6-d265d53a6a2a_655x384.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!i1eQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff7c8baa3-98d6-4f70-b9c6-d265d53a6a2a_655x384.png" width="655" height="384" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f7c8baa3-98d6-4f70-b9c6-d265d53a6a2a_655x384.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:384,&quot;width&quot;:655,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!i1eQ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff7c8baa3-98d6-4f70-b9c6-d265d53a6a2a_655x384.png 424w, https://substackcdn.com/image/fetch/$s_!i1eQ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff7c8baa3-98d6-4f70-b9c6-d265d53a6a2a_655x384.png 848w, https://substackcdn.com/image/fetch/$s_!i1eQ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff7c8baa3-98d6-4f70-b9c6-d265d53a6a2a_655x384.png 1272w, https://substackcdn.com/image/fetch/$s_!i1eQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff7c8baa3-98d6-4f70-b9c6-d265d53a6a2a_655x384.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path 
d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Source: Lanaotek</figcaption></figure></div><p><strong>Electrical Gearboxing</strong></p><p>You need an &#8220;electrical gearboxing&#8221; function that converts the 112G/224G narrow-but-fast SerDes lanes of the host processor into a 20 x 20 microLED array that carries slow-but-wide signals at lower data rates. This chip is implemented as a CMOS ASIC that handles the electrical gearboxing. For example, the Avicena LightBundle&#8482; evaluation kit launched at OFC 2026 uses a specialized 16nm finFET CMOS ASIC. 
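The lane arithmetic behind such a gearbox is straightforward. A sketch, assuming an illustrative 4 Gbps per microLED lane (the 20 x 20 array size is from the text; the per-lane rate is my assumption):

```python
# Gearboxing: trade a few fast SerDes lanes for many slow microLED lanes.
serdes_lane_gbps = 112     # host-side narrow-but-fast lane (112G SerDes)
led_lane_gbps = 4          # assumed slow-but-wide microLED lane rate
array_lanes = 20 * 20      # 20 x 20 microLED array

aggregate_gbps = array_lanes * led_lane_gbps          # total optical throughput
serdes_lanes_carried = aggregate_gbps // serdes_lane_gbps

print(f"{array_lanes} LED lanes x {led_lane_gbps} Gbps = {aggregate_gbps / 1000:.1f} Tbps")
print(f"enough to carry ~{serdes_lanes_carried} x 112G SerDes lanes")
```

The ASIC's job is to slice and retime the fast serial streams across the array on transmit, and aggregate them back on receive.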
The only way a gearbox chip can be avoided is to design the host processor with a UCIe interface that natively implements a wide-but-slow approach (like a custom base-die used in HBM).</p><p><a href="https://www.microsoft.com/en-us/research/publication/mosaic-breaking-the-optics-versus-copper-trade-off-with-a-wide-and-slow-architecture-and-microleds/">Microsoft&#8217;s paper</a> on microLEDs shows that the gearbox functionality only consumes a small portion of the power (0.4W) it would otherwise take with an optical connection that requires DSP/CDR/FEC (3.5W). The argument in the paper is that it is a simple gearboxing function, and thus less power hungry. If per-lane speeds increase in future generations of microLED interconnect, then CDR/FEC functions will be required and power usage will increase, closing the power efficiency gap between lasers and microLEDs.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!OKBn!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F684031f7-f3ec-4e7c-8100-e296cdccb32e_753x476.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!OKBn!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F684031f7-f3ec-4e7c-8100-e296cdccb32e_753x476.png 424w, https://substackcdn.com/image/fetch/$s_!OKBn!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F684031f7-f3ec-4e7c-8100-e296cdccb32e_753x476.png 848w, https://substackcdn.com/image/fetch/$s_!OKBn!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F684031f7-f3ec-4e7c-8100-e296cdccb32e_753x476.png 1272w, 
https://substackcdn.com/image/fetch/$s_!OKBn!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F684031f7-f3ec-4e7c-8100-e296cdccb32e_753x476.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!OKBn!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F684031f7-f3ec-4e7c-8100-e296cdccb32e_753x476.png" width="753" height="476" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/684031f7-f3ec-4e7c-8100-e296cdccb32e_753x476.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:476,&quot;width&quot;:753,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!OKBn!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F684031f7-f3ec-4e7c-8100-e296cdccb32e_753x476.png 424w, https://substackcdn.com/image/fetch/$s_!OKBn!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F684031f7-f3ec-4e7c-8100-e296cdccb32e_753x476.png 848w, https://substackcdn.com/image/fetch/$s_!OKBn!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F684031f7-f3ec-4e7c-8100-e296cdccb32e_753x476.png 1272w, 
https://substackcdn.com/image/fetch/$s_!OKBn!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F684031f7-f3ec-4e7c-8100-e296cdccb32e_753x476.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Source: Microsoft Mosaic</figcaption></figure></div><h3>Why linewidth matters for data rates</h3><p>Let&#8217;s address why microLEDs are only capable of 2-4 Gbps data rates today in pre-production, while DFB EMLs can support 200 Gbps or more.</p><p>The fundamental concept that underlies this limitation is chromatic dispersion in an optical 
fiber. When an optical signal travels through a fiber, different wavelengths travel at different speeds due to the wavelength-dependent refractive index of the glass in the fiber. The pulse exiting the fiber spreads out in time because its different wavelength components arrive at different times.</p><p>This spreading determines the maximum possible data rate: once neighboring pulses start to overlap, they become hard to tell apart and detection errors occur.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Giqu!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb642f176-f4e7-4534-96ae-d6898bf2f9f4_800x298.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Giqu!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb642f176-f4e7-4534-96ae-d6898bf2f9f4_800x298.png 424w, https://substackcdn.com/image/fetch/$s_!Giqu!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb642f176-f4e7-4534-96ae-d6898bf2f9f4_800x298.png 848w, https://substackcdn.com/image/fetch/$s_!Giqu!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb642f176-f4e7-4534-96ae-d6898bf2f9f4_800x298.png 1272w, https://substackcdn.com/image/fetch/$s_!Giqu!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb642f176-f4e7-4534-96ae-d6898bf2f9f4_800x298.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!Giqu!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb642f176-f4e7-4534-96ae-d6898bf2f9f4_800x298.png" width="800" height="298" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b642f176-f4e7-4534-96ae-d6898bf2f9f4_800x298.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:298,&quot;width&quot;:800,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Giqu!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb642f176-f4e7-4534-96ae-d6898bf2f9f4_800x298.png 424w, https://substackcdn.com/image/fetch/$s_!Giqu!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb642f176-f4e7-4534-96ae-d6898bf2f9f4_800x298.png 848w, https://substackcdn.com/image/fetch/$s_!Giqu!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb642f176-f4e7-4534-96ae-d6898bf2f9f4_800x298.png 1272w, https://substackcdn.com/image/fetch/$s_!Giqu!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb642f176-f4e7-4534-96ae-d6898bf2f9f4_800x298.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">Source: Mohamed Abolfotouh on Linkedin</figcaption></figure></div><p>The broadening of a pulse by a time dT (ps) in an optical fiber can be calculated as</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!mJv0!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80f5e2b5-9aa0-45d0-bafd-8b48050c3475_327x79.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!mJv0!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80f5e2b5-9aa0-45d0-bafd-8b48050c3475_327x79.png 
424w, https://substackcdn.com/image/fetch/$s_!mJv0!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80f5e2b5-9aa0-45d0-bafd-8b48050c3475_327x79.png 848w, https://substackcdn.com/image/fetch/$s_!mJv0!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80f5e2b5-9aa0-45d0-bafd-8b48050c3475_327x79.png 1272w, https://substackcdn.com/image/fetch/$s_!mJv0!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80f5e2b5-9aa0-45d0-bafd-8b48050c3475_327x79.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!mJv0!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80f5e2b5-9aa0-45d0-bafd-8b48050c3475_327x79.png" width="239" height="57.74006116207951" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/80f5e2b5-9aa0-45d0-bafd-8b48050c3475_327x79.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:79,&quot;width&quot;:327,&quot;resizeWidth&quot;:239,&quot;bytes&quot;:4499,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.viksnewsletter.com/i/193481292?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80f5e2b5-9aa0-45d0-bafd-8b48050c3475_327x79.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!mJv0!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80f5e2b5-9aa0-45d0-bafd-8b48050c3475_327x79.png 424w, https://substackcdn.com/image/fetch/$s_!mJv0!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80f5e2b5-9aa0-45d0-bafd-8b48050c3475_327x79.png 848w, https://substackcdn.com/image/fetch/$s_!mJv0!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80f5e2b5-9aa0-45d0-bafd-8b48050c3475_327x79.png 1272w, https://substackcdn.com/image/fetch/$s_!mJv0!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80f5e2b5-9aa0-45d0-bafd-8b48050c3475_327x79.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>where D is the dispersion coefficient in ps/(nm&#183;km), L is the link length in kilometers, and &#916;&#955; is the linewidth of the light source in nanometers. 
Assuming Gaussian pulses are being transmitted as bits, the data rate Bmax is:</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Q7Dv!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0170f2c-475a-4b1f-aee5-d39ba3de9eb0_278x72.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Q7Dv!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0170f2c-475a-4b1f-aee5-d39ba3de9eb0_278x72.png 424w, https://substackcdn.com/image/fetch/$s_!Q7Dv!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0170f2c-475a-4b1f-aee5-d39ba3de9eb0_278x72.png 848w, https://substackcdn.com/image/fetch/$s_!Q7Dv!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0170f2c-475a-4b1f-aee5-d39ba3de9eb0_278x72.png 1272w, https://substackcdn.com/image/fetch/$s_!Q7Dv!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0170f2c-475a-4b1f-aee5-d39ba3de9eb0_278x72.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Q7Dv!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0170f2c-475a-4b1f-aee5-d39ba3de9eb0_278x72.png" width="198" height="51.280575539568346" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a0170f2c-475a-4b1f-aee5-d39ba3de9eb0_278x72.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:72,&quot;width&quot;:278,&quot;resizeWidth&quot;:198,&quot;bytes&quot;:5519,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.viksnewsletter.com/i/193481292?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0170f2c-475a-4b1f-aee5-d39ba3de9eb0_278x72.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Q7Dv!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0170f2c-475a-4b1f-aee5-d39ba3de9eb0_278x72.png 424w, https://substackcdn.com/image/fetch/$s_!Q7Dv!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0170f2c-475a-4b1f-aee5-d39ba3de9eb0_278x72.png 848w, https://substackcdn.com/image/fetch/$s_!Q7Dv!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0170f2c-475a-4b1f-aee5-d39ba3de9eb0_278x72.png 1272w, https://substackcdn.com/image/fetch/$s_!Q7Dv!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0170f2c-475a-4b1f-aee5-d39ba3de9eb0_278x72.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>Let&#8217;s try out some numbers here.</p><p><strong>With a microLED</strong></p><p>A standard legacy multimode optical fiber (such as OM3 or OM4) operating at 850 nm exhibits a material dispersion 
parameter of approximately 100 ps/(nm&#183;km). For an 850 nm microLED with a spectral linewidth of 40 nm (4.7% of center wavelength), operating over a link length of 0.010 km (10 meters), the temporal pulse spread is calculated as: 100&#215;0.01&#215;40 = 40 ps. <strong>Maximum data rate supported = 0.44/40 ps = 11 Gbps/lane</strong>. These speeds have been demonstrated in research papers with microLEDs, but production numbers are slower at 2-4 Gbps/lane.</p><p><strong>With a DFB laser</strong></p><p>Let us assume the same wavelength and link distance, but a linewidth of 0.001 nm. The temporal pulse spread is: 100&#215;0.01&#215;0.001 = 0.001 ps. Maximum data rate supported = 0.44/0.001 ps = 440,000 Gbps/lane, meaning dispersion is simply no longer the limiting factor at this distance.</p><p><strong>This demonstrates why linewidth largely determines the data rate a given type of light source can support in an optical communication system.</strong></p><p>There is a lot more to discuss before we can declare microLEDs fit or unfit for datacenter needs: coupling efficiency into fibers, beam shapes, reliability assessments, and an overall supplier/customer landscape that is still rapidly evolving.</p><p>Industry sources tell me that hyperscalers are open to exploring microLEDs for future interconnect needs, but the microLED industry for datacenter interconnects is still young. An additional aspect worth exploring is how VCSELs compare as an alternative to microLEDs &#8211; something that <a href="https://investor.lumentum.com/financial-news-releases/news-details/2026/Lumentum-Showcases-Breakthrough-Optical-Scale-Up-Demonstration-at-OFC-2026-Using-VCSEL-Technology/default.aspx">Lumentum is actively exploring with 1060 nm VCSELs</a>. 
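The two dispersion worked examples above can be sanity-checked with a short Python sketch; the dispersion coefficient, linewidths, and link length are the numbers assumed in this post, not measured values:

```python
# Dispersion-limited lane rate: dT = D * L * d_lambda, B_max = 0.44 / dT.

def pulse_spread_ps(D_ps_per_nm_km: float, L_km: float, linewidth_nm: float) -> float:
    """Temporal pulse spread dT in picoseconds."""
    return D_ps_per_nm_km * L_km * linewidth_nm

def max_rate_gbps(spread_ps: float) -> float:
    """B_max = 0.44 / dT; with dT in ps this is in THz, so scale by 1000 for Gbps."""
    return 0.44 / spread_ps * 1000.0

# microLED: 40 nm linewidth over 10 m (0.010 km) of multimode fiber at 850 nm
led_spread = pulse_spread_ps(100, 0.010, 40)     # ~40 ps
print(max_rate_gbps(led_spread))                  # ~11 Gbps/lane

# DFB laser: 0.001 nm linewidth, same fiber and distance
dfb_spread = pulse_spread_ps(100, 0.010, 0.001)  # ~0.001 ps
print(max_rate_gbps(dfb_spread))                  # ~440,000 Gbps/lane
```

Note the unit bookkeeping: with dT in picoseconds, 0.44/dT comes out in THz, so the narrow-linewidth laser case lands in the hundreds of terabits per second, which is just another way of saying chromatic dispersion stops being the bottleneck over 10 meters.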
So many future post options!</p><div class="captioned-button-wrap" data-attrs="{&quot;url&quot;:&quot;https://www.viksnewsletter.com/p/microleds-vs-lasers-the-linewidth-tradeoff?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="CaptionedButtonToDOM"><div class="preamble"><p class="cta-caption"><em>This post is public so feel free to share it. 5 shares gets you a free month of paid subscriber benefits.</em></p></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.viksnewsletter.com/p/microleds-vs-lasers-the-linewidth-tradeoff?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.viksnewsletter.com/p/microleds-vs-lasers-the-linewidth-tradeoff?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p></div>]]></content:encoded></item><item><title><![CDATA[🍪 TWiC: Samsung SiPho, Mobile Demand, Nvidia+Marvell, Intel Fab, Claude Leak, ++]]></title><description><![CDATA[This Week in Chips: Freshy chip news from the semi galaxy, and neighboring dwarf stars]]></description><link>https://www.viksnewsletter.com/p/twic-samsung-sipho-mobile-demand</link><guid isPermaLink="false">https://www.viksnewsletter.com/p/twic-samsung-sipho-mobile-demand</guid><dc:creator><![CDATA[Vikram Sekar]]></dc:creator><pubDate>Fri, 03 Apr 2026 11:31:39 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!v_1r!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6aafd2f6-7acd-4168-99d9-40252635ef30_1337x748.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The semi world is still recovering from the &#8220;TurboQuant setback&#8221; even while memory demand looks stronger than ever. 
This week&#8217;s deep dive covers the exact reasons why &#8220;this time it&#8217;s different.&#8221; Several analysts including <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;FundaAI&quot;,&quot;id&quot;:282947998,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/11f71bef-290f-4a13-8637-abb673cea09f_1317x1317.jpeg&quot;,&quot;uuid&quot;:&quot;2717cba6-b351-4612-9225-decfce9b8dc7&quot;}" data-component-name="MentionToDOM"></span> released reports this week with similar conclusions &#8212; that the 2026 memory pullback is more of a buying opportunity and that memory fundamentals look strong. </p><p>Here&#8217;s the post if you haven&#8217;t gotten to it yet.</p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;21bbacd2-00a2-4dd8-86c7-b46660552b95&quot;,&quot;caption&quot;:&quot;Last week on this newsletter, we discussed how TurboQuant works and how it signals another step along the path of algorithmic optimizations that is decoupled from the need for memory in AI systems. In the latest episode of Semi Doped, Austin (of ChipStrat&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;Why the DRAM Structural Floor in the AI Era Makes This Time Different&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:124411709,&quot;name&quot;:&quot;Vikram Sekar&quot;,&quot;bio&quot;:&quot;EE PhD. 15+ years in semiconductor engineering. 
I've helped build the technology I write about.&quot;,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!RTM-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5e3e259-005c-4bcf-bffd-1a20cd78aa86_1080x1080.png&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:100}],&quot;post_date&quot;:&quot;2026-04-01T13:41:01.174Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!NCxX!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3602758f-7e11-46fa-b895-315bac255ba4_717x696.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.viksnewsletter.com/p/evaluating-memory-hog-cycles-in-the-ai-era&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:192845993,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:28,&quot;comment_count&quot;:0,&quot;publication_id&quot;:2065897,&quot;publication_name&quot;:&quot;Vik's Newsletter&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!9JlA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa409d69d-ca10-4bfe-a1fc-f8d291690566_185x185.png&quot;,&quot;belowTheFold&quot;:false,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><p>In last week&#8217;s Semi Doped podcast, we covered TurboQuant in some detail, and argued whether Arm&#8217;s AGI CPU will actually displace x86 in the long run.</p><ul><li><p><a href="https://youtu.be/Npi-TLWCz3o?si=oUYB1rSe27Vb9Tkd">ARM AGI CPU has entered the chat, TurboQuant thrashes memory stocks</a></p></li></ul><p>Now, on to the news.</p><div><hr></div><div class="subscription-widget-wrap-editor" 
data-attrs="{&quot;url&quot;:&quot;https://www.viksnewsletter.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><em>News roundups are free, but deep-dives are where the real insights live. Upgrade to a paid sub and access 100+ from the archives!</em></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><div><hr></div><h4>Samsung Enters SiPho, Wants to Connect HBM with Light</h4><p>Samsung unveiled its SiPho Foundry platform at OFC 2026 that is production-ready, with a PDK, and ready for manufacturing on a 300 mm wafer process. The main pitch here is Samsung&#8217;s <strong>vertically integrated memory capabilities with HBM and silicon photonics under one roof, a turnkey bundle that TSMC cannot match</strong>. Early results are expected in the 2028-29 time frame, and the industry places Samsung roughly 3 years behind TSMC, primarily due to ecosystem, design enablement, and customer trust. 
Samsung&#8217;s Semiconductor division has already designated its silicon photonics processes as &#8220;<strong>I-CubeSo</strong>&#8221; and &#8220;<strong>I-CubeEo</strong>&#8221; (try saying that 3 times in a row) and is allegedly working closely with Broadcom and Marvell.</p><p>(via <a href="https://www.thelec.net/news/articleView.html?idxno=6189">The Elec</a>, <a href="https://www.digitimes.com/news/a20260330PD228/silicon-photonics-samsung-tsmc-2028.html">DigiTimes</a>)</p><h4>Low/mid tier mobile is driving demand destruction; PC demand down</h4><p>MediaTek and Qualcomm are both <a href="https://wccftech.com/mediatek-qualcomm-reportedly-slash-shipments-of-4nm-mobile-chip-by-15-20-million-units-as-demand-destruction-arrives/">cutting 4nm chip shipments</a> by 15-20 million units combined because memory has gotten so expensive that <strong>low-to-mid-tier smartphones no longer make sense to build</strong>, their BOM costs having risen too far. Whether premium-tier phones are insulated is questionable, especially when Apple is supposedly <a href="https://wccftech.com/apple-is-reportedly-buying-up-all-available-mobile-dram-at-very-high-prices-to-starve-out-its-competitors/">buying up all available DRAM at high prices</a> to starve out Android competitors like Samsung (ironic since Samsung is also Apple's largest DRAM supplier).</p><p>Last year, more than 360 million smartphones shipped below $150, representing a substantial share of global volumes, with that proportion rising in key emerging markets like Africa and India. The push for economies of scale will drive mergers among the smaller players: Omdia cites <a href="https://c.realme.com/global/post-details/2009013309261570048">Realme reintegrating under OPPO's umbrella</a> as an early sign of consolidation as vendors seek scale to manage rising costs. 
<strong>The consensus is that this is a structural reallocation of memory capacity toward AI/HBM at the expense of consumer TAM.</strong></p><h4>Nvidia joins forces with Marvell via NVLink Fusion</h4><p>Nvidia and Marvell announced a partnership that allows Marvell&#8217;s custom ASIC silicon to run on Nvidia infrastructure via NVLink Fusion. Market sentiment is overwhelmingly positive on this news, and Marvell shares surged 13%. The deal is viewed as a defensive masterstroke by Jensen that signals that <strong>if Nvidia can&#8217;t beat the custom chip movement, they can at least own the fabric ASICs run on</strong>. The clear <strong>winner here is Marvell</strong> because they get the $2B investment cookie that Nvidia has been handing out to everybody (LITE, COHR, NBIS&#8230;) and also &#8220;preferred partner&#8221; status with Nvidia. This puts <strong>Broadcom in a more challenging position</strong> as the primary backer of the UALink standard over NVLink Fusion.</p><p>(via <a href="https://nvidianews.nvidia.com/news/nvidia-ai-ecosystem-expands-as-marvell-joins-forces-through-nvlink-fusion">Nvidia News</a>)</p><h4>Intel buys back 49% stake in its Fab 34 Ireland fab</h4><p>Intel said it would spend $14.2 billion (cash+debt) to buy back the 49% stake it sold to Apollo Global in its Ireland manufacturing facility. The fab runs leading-edge nodes like Intel 3 and Intel 4, producing its Core Ultra and Xeon 6 processors. The price paid is $3B more than Intel sold it for two years ago, and <strong>the market is reading this as conviction: Intel putting real money behind its own chip manufacturing</strong>. The strongly positive sentiment reflects confidence that Intel&#8217;s foundry turnaround is actually happening. 
Investors will closely scrutinize whether the strategic benefits are strong enough to justify the added cost and financing burden.</p><p>(via <a href="https://newsroom.intel.com/corporate/intel-repurchase-49-equity-interest-ireland-fab-joint-venture">Intel</a>)</p><h4>OpenAI closes record-breaking $122B funding round</h4><p>Total valuation is $852B. Seriously, that is a lot of money. OpenAI is pulling in $2B/mo of revenue and still losing money, with profitability not expected until 2030. &#128064;</p><p>(via <a href="https://www.theguardian.com/technology/2026/mar/31/openai-raises-122-billion-ai-boom">The Guardian</a>)</p><h4>Anthropic&#8217;s Claude has a wardrobe malfunction</h4><p>Claude&#8217;s source code leaked via an npm repository, and yeah, we can see everything underneath &#128584;, including a digital pet and an always-on agent. The funniest coverage on this is from <a href="https://www.youtube.com/watch?v=GdgRpiQRsis&amp;t=458s">ThePrimeagen on YouTube</a>. Seriously, watch it lol.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!v_1r!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6aafd2f6-7acd-4168-99d9-40252635ef30_1337x748.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!v_1r!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6aafd2f6-7acd-4168-99d9-40252635ef30_1337x748.png 424w, https://substackcdn.com/image/fetch/$s_!v_1r!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6aafd2f6-7acd-4168-99d9-40252635ef30_1337x748.png 848w, 
https://substackcdn.com/image/fetch/$s_!v_1r!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6aafd2f6-7acd-4168-99d9-40252635ef30_1337x748.png 1272w, https://substackcdn.com/image/fetch/$s_!v_1r!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6aafd2f6-7acd-4168-99d9-40252635ef30_1337x748.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!v_1r!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6aafd2f6-7acd-4168-99d9-40252635ef30_1337x748.png" width="1337" height="748" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6aafd2f6-7acd-4168-99d9-40252635ef30_1337x748.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:748,&quot;width&quot;:1337,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:659687,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.viksnewsletter.com/i/192686774?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6aafd2f6-7acd-4168-99d9-40252635ef30_1337x748.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!v_1r!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6aafd2f6-7acd-4168-99d9-40252635ef30_1337x748.png 424w, 
https://substackcdn.com/image/fetch/$s_!v_1r!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6aafd2f6-7acd-4168-99d9-40252635ef30_1337x748.png 848w, https://substackcdn.com/image/fetch/$s_!v_1r!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6aafd2f6-7acd-4168-99d9-40252635ef30_1337x748.png 1272w, https://substackcdn.com/image/fetch/$s_!v_1r!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6aafd2f6-7acd-4168-99d9-40252635ef30_1337x748.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>(via literally everywhere)</p><h4>Your latest co-worker is now an AI employee</h4><p>This is a $2,000/mo virtual AI employee from a company called <a href="https://www.kuse.ai/">Kuse AI</a>; it never sleeps, eats, or goes to happy hour. More than 2,000 companies have signed up for the waitlist. If companies need to pay 3-5x more for a human who demands work-life balance, an OpenClaw agent that the company calls Junior seems to be a better bet. All well and good, but you can&#8217;t wring an AI&#8217;s neck when things go wrong.</p><p>(via <a href="https://www.bloomberg.com/news/articles/2026-04-02/meet-the-new-ai-coworker-who-won-t-stop-snitching-to-your-boss">Bloomberg</a>)</p><h4>Atlas Studio: A Foundation Model for Electromagnetics</h4><p>This is a physics foundation model useful for RF design work that would take too long in a conventional simulator. <a href="https://www.notboring.co/p/electromagnetism-secretly-runs-the">Not Boring</a> has a long piece on it. Long-time readers would remember <a href="https://open.substack.com/pub/viksnewsletter/p/dallem-in-30-hours?utm_campaign=post-expanded-share&amp;utm_medium=web">a piece I wrote</a> about crazy RF designs from Prof. Sengupta&#8217;s group (Princeton) last year. Well, Prof. 
Sengupta is now <a href="https://www.linkedin.com/posts/kaushik-sengupta-4960a57_several-of-you-messaged-me-about-blatantly-share-7445099437208584193-6KMe?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAAAIFTBQBG7oyp0HtapsRUeLcAS1LULbQq6g">pushing back</a> on the &#8216;radicalness&#8217; of the foundation EM model by comparing it to someone claiming that they invented the transistor.</p><p>(h/t <a href="https://x.com/SinghJyotirmai">@SinghJyotirmai</a>, via <a href="https://www.arenaphysica.com/publications/rf-studio">Arena Physica</a>)</p><h4>PrismML: 1-bit Bonsai 8B model fits into 1.15GB</h4><p>In the wake of the TurboQuant meltdown, I wonder if we should worry about dramatically compressed models like PrismML. If this delivers any level of useful intelligence, it could unlock a whole new world of edge inference. It&#8217;s open-source and extremely lightweight for an 8B-parameter model. I think I&#8217;ll run it on my Mac M3 Pro this weekend and see how I like it. I already tried it <a href="https://x.com/vikramskr/status/2039594470451331196?s=20">on my iPhone</a>. 
It could also serve as a local fallback model for OpenClaw.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!i4kd!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe43f1472-c76b-4c3f-a476-188e6e00873d_840x445.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!i4kd!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe43f1472-c76b-4c3f-a476-188e6e00873d_840x445.png 424w, https://substackcdn.com/image/fetch/$s_!i4kd!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe43f1472-c76b-4c3f-a476-188e6e00873d_840x445.png 848w, https://substackcdn.com/image/fetch/$s_!i4kd!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe43f1472-c76b-4c3f-a476-188e6e00873d_840x445.png 1272w, https://substackcdn.com/image/fetch/$s_!i4kd!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe43f1472-c76b-4c3f-a476-188e6e00873d_840x445.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!i4kd!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe43f1472-c76b-4c3f-a476-188e6e00873d_840x445.png" width="840" height="445" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e43f1472-c76b-4c3f-a476-188e6e00873d_840x445.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:445,&quot;width&quot;:840,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!i4kd!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe43f1472-c76b-4c3f-a476-188e6e00873d_840x445.png 424w, https://substackcdn.com/image/fetch/$s_!i4kd!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe43f1472-c76b-4c3f-a476-188e6e00873d_840x445.png 848w, https://substackcdn.com/image/fetch/$s_!i4kd!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe43f1472-c76b-4c3f-a476-188e6e00873d_840x445.png 1272w, https://substackcdn.com/image/fetch/$s_!i4kd!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe43f1472-c76b-4c3f-a476-188e6e00873d_840x445.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path 
d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>(via <a href="https://prismml.com/">PrismML</a>)</p><div><hr></div><p>Have a great weekend!</p><div class="captioned-button-wrap" data-attrs="{&quot;url&quot;:&quot;https://www.viksnewsletter.com/p/twic-samsung-sipho-mobile-demand?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="CaptionedButtonToDOM"><div class="preamble"><p class="cta-caption"><em>This post is public so feel free to share it.</em></p></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.viksnewsletter.com/p/twic-samsung-sipho-mobile-demand?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.viksnewsletter.com/p/twic-samsung-sipho-mobile-demand?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p></div><p></p>]]></content:encoded></item><item><title><![CDATA[Why the DRAM 
Structural Floor in the AI Era Makes This Time Different]]></title><description><![CDATA[Why AI demand adds a structural floor that transforms the typical memory cycle into a strong demand signal that is here to stay.]]></description><link>https://www.viksnewsletter.com/p/evaluating-memory-hog-cycles-in-the-ai-era</link><guid isPermaLink="false">https://www.viksnewsletter.com/p/evaluating-memory-hog-cycles-in-the-ai-era</guid><dc:creator><![CDATA[Vikram Sekar]]></dc:creator><pubDate>Wed, 01 Apr 2026 13:41:01 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!NCxX!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3602758f-7e11-46fa-b895-315bac255ba4_717x696.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Last week on this newsletter, we discussed <a href="https://www.viksnewsletter.com/p/turboquant-inner-workings-and-implications?r=222kot&amp;utm_campaign=post&amp;utm_medium=web">how TurboQuant works</a> and how it signals another step along the path of algorithmic optimizations that is decoupled from the need for memory in AI systems. In the <a href="https://youtu.be/Npi-TLWCz3o?si=P6NZ7tQVqHqvJ3K0">latest episode of Semi Doped</a>, Austin (of <a href="https://substack.com/@chipstrat">ChipStrat</a>) and I wondered why memory investors are the most skittish of the lot when it comes to innovation. It turns out there is a good reason: repeated boom-bust cycles - some of which we will visit in this post.</p><p>Still, worries of TurboQuant wrecking the memory industry have not abated. 
On the contrary, a new fear has been unlocked: the peak of the &#8220;hog cycle,&#8221; often read by seasoned investors as a strong signal of a memory-market top and impending doom.</p><p>The counterargument is that &#8220;this time is different,&#8221; driven primarily by the insatiable demand for memory in AI deployments, and the token explosion in the era of agentic AI that is <a href="https://x.com/GavinSBaker/status/2038714072342925561?s=20">already here</a>. The real question is whether that demand is high enough to serve as a fallback when the consumer segment finally &#8220;says no&#8221; to the extraordinary price hikes and cancels deployment of low/mid-tier devices, and/or downgrades product specs to fit whatever memory is reasonably available.</p><p>In this article, we will investigate these arguments in greater detail, make reasonable guesses and inferences where possible, and take a balanced, realistic stance based on the current state of the industry.</p><p>Broadly, we will follow this line of inquiry:</p><blockquote><p>The classic memory hog cycle isn&#8217;t dead &#8212; but AI has added a multi-layered structural floor that mutes the traditional &#8216;PC/smartphone makers say no&#8217; warning signal and turns the 2026 pullback into a buying opportunity rather than the start of another 2019/2022-style bust.</p></blockquote><p>Here is what we cover:</p><ul><li><p>Hog Cycles as a Leading Memory Indicator</p></li><li><p>2026: PC/Smartphone Industry Says &#8216;No&#8217;</p></li><li><p>&#8220;This Time is Different&#8221; &#8211; But is it?</p></li><li><p>&#128274;Understanding the New AI Realities</p></li><li><p>&#128274;Memory as the Bottleneck</p></li><li><p>&#128274;Revisiting the Thesis and Risks</p></li></ul><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.viksnewsletter.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}"
data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><em>Upgrade to a paid subscription to get deep insights into the semiconductor industry + Instant access to 100+ articles in archive.</em></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>If you are not a paid subscriber, you can purchase just this article using the button below. You can find the whole catalog of articles for purchase <a href="https://www.semiexponent.com/products">at this link</a>.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.semiexponent.com/evaluating-memory-hog-cycles-in-the-ai-era&quot;,&quot;text&quot;:&quot;Buy the ebook&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.semiexponent.com/evaluating-memory-hog-cycles-in-the-ai-era"><span>Buy the ebook</span></a></p><p><em>Note: This article does not constitute investment advice. Do your own research.</em></p><div><hr></div><h3>Hog Cycles as a Leading Memory Indicator</h3><p>The classical hog cycle works like this: High pork prices lead farmers to breed more pigs. By the time the pigs mature, the market is flooded, prices crash, farmers stop breeding, and a shortage ensues and causes prices to spike again.</p><p>This applies directly to memory. A new technology (smartphones, cloud computing, AI) drives massive memory demand and prices soar. Memory makers enjoy record margins and heavily invest in expanding fab capacity. It takes years for that new capacity to come online. During this time, buyers double-order to secure supply, artificially inflating demand. 
The new fabs finally start producing exactly when downstream demand begins to cool. Supply floods the market, inventory piles up, and prices crash below the cost of production.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!HF9a!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b74adf9-dab1-44b0-b656-875dba8467d8_1600x873.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!HF9a!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b74adf9-dab1-44b0-b656-875dba8467d8_1600x873.png 424w, https://substackcdn.com/image/fetch/$s_!HF9a!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b74adf9-dab1-44b0-b656-875dba8467d8_1600x873.png 848w, https://substackcdn.com/image/fetch/$s_!HF9a!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b74adf9-dab1-44b0-b656-875dba8467d8_1600x873.png 1272w, https://substackcdn.com/image/fetch/$s_!HF9a!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b74adf9-dab1-44b0-b656-875dba8467d8_1600x873.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!HF9a!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b74adf9-dab1-44b0-b656-875dba8467d8_1600x873.png" width="1456" height="794" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8b74adf9-dab1-44b0-b656-875dba8467d8_1600x873.png&quot;,&quot;srcNoWatermark&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2ff99a2e-e3ba-49bb-a6f9-dd81896be9da_1600x873.png&quot;,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:794,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!HF9a!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b74adf9-dab1-44b0-b656-875dba8467d8_1600x873.png 424w, https://substackcdn.com/image/fetch/$s_!HF9a!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b74adf9-dab1-44b0-b656-875dba8467d8_1600x873.png 848w, https://substackcdn.com/image/fetch/$s_!HF9a!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b74adf9-dab1-44b0-b656-875dba8467d8_1600x873.png 1272w, https://substackcdn.com/image/fetch/$s_!HF9a!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b74adf9-dab1-44b0-b656-875dba8467d8_1600x873.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" 
stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Source: <a href="http://viksnewsletter.com">ViksNewsletter.com</a> via Gemini.</figcaption></figure></div><p>This cycle has played out at least twice in the last decade.</p><ul><li><p><strong>2019 Cloud Burst</strong>: The datacenter boom for cloud computing and crypto mining led to excess inventory when everybody suddenly stopped buying. DRAM prices plummeted by 50%.</p></li><li><p><strong>2022 Covid Hangover</strong>: Covid-era shutdowns and the spike in PC/phone sales for remote work left OEMs with excess inventory that suddenly had no demand.
The carnage was real: DRAM spot prices dropped &gt;50%, server DRAM revenues collapsed, memory stocks fell 30-50%, and NAND sold for less than it took to make.</p></li></ul><h3>2026: PC/Smartphone Industry Says &#8216;No&#8217;</h3><p>The memory downturn of the past week looks like one for the history books, driven largely by fears around TurboQuant and, more importantly, by another catalyst worth discussing: the PC/smartphone industry. Shares of Micron, SanDisk, SK Hynix, and Samsung are all down 20-30% from their recent peaks. I&#8217;m guessing we will talk about this for years to come.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!NCxX!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3602758f-7e11-46fa-b895-315bac255ba4_717x696.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!NCxX!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3602758f-7e11-46fa-b895-315bac255ba4_717x696.png 424w, https://substackcdn.com/image/fetch/$s_!NCxX!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3602758f-7e11-46fa-b895-315bac255ba4_717x696.png 848w, https://substackcdn.com/image/fetch/$s_!NCxX!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3602758f-7e11-46fa-b895-315bac255ba4_717x696.png 1272w, https://substackcdn.com/image/fetch/$s_!NCxX!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3602758f-7e11-46fa-b895-315bac255ba4_717x696.png 1456w" sizes="100vw"><img
src="https://substackcdn.com/image/fetch/$s_!NCxX!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3602758f-7e11-46fa-b895-315bac255ba4_717x696.png" width="717" height="696" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3602758f-7e11-46fa-b895-315bac255ba4_717x696.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:696,&quot;width&quot;:717,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!NCxX!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3602758f-7e11-46fa-b895-315bac255ba4_717x696.png 424w, https://substackcdn.com/image/fetch/$s_!NCxX!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3602758f-7e11-46fa-b895-315bac255ba4_717x696.png 848w, https://substackcdn.com/image/fetch/$s_!NCxX!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3602758f-7e11-46fa-b895-315bac255ba4_717x696.png 1272w, https://substackcdn.com/image/fetch/$s_!NCxX!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3602758f-7e11-46fa-b895-315bac255ba4_717x696.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container 
restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Mainstream media perpetuating TurboQuant fears. Source: Financial Times</figcaption></figure></div><p>The PC/smartphone industry is the whale of the memory market, consuming massive amounts of DDR, LPDDR, and NAND flash storage. Of late, it has been squeezed out of the memory supply chain by steep spikes in DRAM and NAND pricing, driven primarily by shortages stemming from the huge demand for AI accelerators, which consume nearly 70% of global high-end DRAM production.
The recent TrendForce memory forecast below shows why pricing has reached a boiling point.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!PB4-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F33e99a6d-54d8-4158-aff8-a9a92c07e73e_1080x560.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!PB4-!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F33e99a6d-54d8-4158-aff8-a9a92c07e73e_1080x560.png 424w, https://substackcdn.com/image/fetch/$s_!PB4-!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F33e99a6d-54d8-4158-aff8-a9a92c07e73e_1080x560.png 848w, https://substackcdn.com/image/fetch/$s_!PB4-!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F33e99a6d-54d8-4158-aff8-a9a92c07e73e_1080x560.png 1272w, https://substackcdn.com/image/fetch/$s_!PB4-!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F33e99a6d-54d8-4158-aff8-a9a92c07e73e_1080x560.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!PB4-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F33e99a6d-54d8-4158-aff8-a9a92c07e73e_1080x560.png" width="1080" height="560" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/33e99a6d-54d8-4158-aff8-a9a92c07e73e_1080x560.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:560,&quot;width&quot;:1080,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!PB4-!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F33e99a6d-54d8-4158-aff8-a9a92c07e73e_1080x560.png 424w, https://substackcdn.com/image/fetch/$s_!PB4-!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F33e99a6d-54d8-4158-aff8-a9a92c07e73e_1080x560.png 848w, https://substackcdn.com/image/fetch/$s_!PB4-!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F33e99a6d-54d8-4158-aff8-a9a92c07e73e_1080x560.png 1272w, https://substackcdn.com/image/fetch/$s_!PB4-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F33e99a6d-54d8-4158-aff8-a9a92c07e73e_1080x560.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path 
d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Prices for nearly every type of memory and storage have nearly doubled in a single quarter, which means memory alone is estimated to account for 25-30% of a PC/smartphone bill of materials (BoM). Two other factors compound the problem:</p><ol><li><p>The long-term agreements for memory supply entered after the Covid-era bust are coming to an end.
Consumer OEMs are being forced into an open market where there is virtually no silicon left to purchase.</p></li><li><p>The timing is particularly devastating for the PC industry, as this hyper-inflationary spike collides directly with the <a href="https://support.microsoft.com/en-us/windows/windows-10-support-has-ended-on-october-14-2025-2ca8b313-1946-43d3-b55c-2b95b107f281">Microsoft Windows 10 end-of-life</a> hardware refresh cycle.</p></li></ol><p>Dan Nystedt on X <a href="https://x.com/dnystedt/status/2038200585086972169">explains</a> how the PC/smartphone industry has now drawn a line in the sand by planning &#8220;fewer (or zero) mid-to-low-end handsets in 2026&#8221; &#8211; a sign often read by seasoned memory investors as a signal to sell. IDC <a href="https://www.idc.com/resource-center/press-releases/idc-cuts-2026-pc-outlook-to-11-3-as-memory-shortages-and-supply-chain-disruptions-persist-into-2027/">reports</a> that global PC shipments are expected to drop by 11.3% in 2026, a far steeper decline than the 2.4% drop it estimated just a quarter ago.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!GdC2!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe57ff502-6a0b-43b7-ac7e-034c26ba0a27_831x647.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!GdC2!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe57ff502-6a0b-43b7-ac7e-034c26ba0a27_831x647.png 424w, https://substackcdn.com/image/fetch/$s_!GdC2!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe57ff502-6a0b-43b7-ac7e-034c26ba0a27_831x647.png 848w,
https://substackcdn.com/image/fetch/$s_!GdC2!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe57ff502-6a0b-43b7-ac7e-034c26ba0a27_831x647.png 1272w, https://substackcdn.com/image/fetch/$s_!GdC2!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe57ff502-6a0b-43b7-ac7e-034c26ba0a27_831x647.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!GdC2!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe57ff502-6a0b-43b7-ac7e-034c26ba0a27_831x647.png" width="831" height="647" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e57ff502-6a0b-43b7-ac7e-034c26ba0a27_831x647.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:647,&quot;width&quot;:831,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!GdC2!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe57ff502-6a0b-43b7-ac7e-034c26ba0a27_831x647.png 424w, https://substackcdn.com/image/fetch/$s_!GdC2!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe57ff502-6a0b-43b7-ac7e-034c26ba0a27_831x647.png 848w, 
https://substackcdn.com/image/fetch/$s_!GdC2!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe57ff502-6a0b-43b7-ac7e-034c26ba0a27_831x647.png 1272w, https://substackcdn.com/image/fetch/$s_!GdC2!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe57ff502-6a0b-43b7-ac7e-034c26ba0a27_831x647.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Source: IDC</figcaption></figure></div><p>Conventional wisdom and history lessons in the memory market dictate that the 
cycle has peaked. But bulls argue that insatiable AI demand makes this time different.</p><h3>&#8220;This Time is Different&#8221; &#8211; But is it?</h3><p>The simple bull argument is that the extraordinary demand for AI will soak up any short-term demand decline from the PC/smartphone industry. That is, AI acts as a &#8220;fallback&#8221; when consumer demand drops, so memory prices stay unaffected. But how valid is this?</p><p>This argument holds for now because memory makers are happy to divert production lines towards HBM and server-grade DRAM, as <a href="https://investors.micron.com/news-releases/news-release-details/micron-announces-exit-crucial-consumer-business">Micron did by dropping the Crucial line of consumer-grade memory</a>. This worked out well: Micron&#8217;s last earnings call guided for an 81% gross margin, which would exceed even Nvidia&#8217;s if they deliver. Their non-HBM product lines are also more profitable now, thanks to inflated spot market pricing. Spot pricing is a double-edged sword, though: in a deflationary market, the same product lines will drag down revenue.</p><p>Interestingly, history shows that there has almost always been a series of overlapping technology S-curves that acted as a &#8220;fallback&#8221; and delayed (<em>not</em> avoided) the eventual memory market destruction.</p><p>Here are some examples.</p><h4>2015-2016: Smartphone Maturation + Cloud Buildout</h4><ul><li><p><strong>The Softening</strong>: In early 2015, the global smartphone adoption curve began to flatten and the upgrade cycle stretched from 18 months to 24+ months. PC sales were also in secular decline. With demand softening, smartphone OEMs began aggressively pushing back on memory contract prices.</p></li><li><p><strong>The Fallback</strong>: The early cloud build-out meant that AWS, Azure, and Google Cloud were beginning to scale their massive data centers.
This created a new, hungry sink for server DRAM and NAND SSDs, absorbing wafers that would have otherwise flooded the mobile and PC markets.</p></li><li><p><strong>The Crash</strong>: The crash came in early 2016: cloud buildouts could not fully absorb the PC/smartphone softening, leading to an inventory glut and plummeting prices.</p></li></ul><h4>2018-2019: Smartphone Decline + Crypto Mania</h4><ul><li><p><strong>The Softening</strong>: Smartphone sales went from slowing to actually contracting globally. At the same time, memory prices from the 2017 cloud supercycle had gotten so high that smartphone OEMs actively decided to cap the amount of DRAM in their flagship phones to protect their profit margins.</p></li><li><p><strong>The Fallback</strong>: The 2017&#8211;2018 crypto bull run created insatiable demand for high-end graphics cards, sucking up massive amounts of GDDR and standard PC DRAM. At the same time, hyperscalers, terrified of supply shortages, were buying server memory at a frantic pace.</p></li><li><p><strong>The Crash</strong>: By 2019, cloud hyperscalers realized they had massively over-ordered DRAM and switched to inventory-digestion mode. Then came the crypto winter: Bitcoin crashed once mining became unprofitable, and demand for graphics cards and GDDR evaporated overnight.</p></li></ul><h4>2022-2023: Post-Covid Demand Drop</h4><ul><li><p><strong>The Softening</strong>: By early 2022, everyone who needed a laptop or phone for remote work already had one. As inflation spiked, consumer spending on electronics completely froze. PC and smartphone OEMs hit the brakes hard, cancelling memory orders to burn through their existing warehouses of chips.</p></li><li><p><strong>The Fallback</strong>: There was none.
Cloud hyperscalers were not on a spending spree for memory, and as a result, nothing softened the blow.</p></li><li><p><strong>The Crash</strong>: Because memory makers had aggressively built new fabs during the 2021 boom, the simultaneous collapse of both the consumer main market and the enterprise fallback created the worst inventory glut in the history of the memory industry, forcing memory makers into deep operating losses.</p></li></ul><p>Memory cycles have mostly had an infrastructure buildout softening the blow when consumer demand drops. The question now is: will this happen again with the AI infrastructure buildout?</p><p><strong>After the paywall:</strong></p><ul><li><p><strong>Understanding the New AI Realities:</strong> The four structural differences&#8212;memory density, wafer efficiency, allocation shift, and contract stickiness&#8212;that make current AI infrastructure demand fundamentally unique.</p></li><li><p><strong>Memory as the Bottleneck:</strong> Why memory, particularly for long context windows and inference, has replaced compute as the primary bottleneck in AI systems.</p></li><li><p><strong>Revisiting the Thesis and Risks:</strong> Evaluates the AI buildout as a structural floor for demand, and provides perspectives on the 2026 memory pullback and risks.</p></li></ul>
      <p>
          <a href="https://www.viksnewsletter.com/p/evaluating-memory-hog-cycles-in-the-ai-era">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[🍪 TWiC: TurboQuant, AGI CPU, GF Sues Tower, CPU Price Hikes, 400G SiPho, AAOI, ++]]></title><description><![CDATA[What's your Weissman score?]]></description><link>https://www.viksnewsletter.com/p/twic-turboquant-agi-cpu-gf-sues-tower</link><guid isPermaLink="false">https://www.viksnewsletter.com/p/twic-turboquant-agi-cpu-gf-sues-tower</guid><dc:creator><![CDATA[Vikram Sekar]]></dc:creator><pubDate>Fri, 27 Mar 2026 10:02:39 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!VxHl!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8befdc01-a9f0-4106-b8ad-fea55d2317ad_1240x930.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><a href="https://research.google/blog/turboquant-redefining-ai-efficiency-with-extreme-compression/">TurboQuant</a> took center stage this week &#8212; a Google research blog that showed up even though the paper has been on arXiv for a year. 
The &#8220;Pied Piper&#8221; compression algorithm of the KV-cache world scared the memory industry into selling off stock.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!VxHl!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8befdc01-a9f0-4106-b8ad-fea55d2317ad_1240x930.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!VxHl!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8befdc01-a9f0-4106-b8ad-fea55d2317ad_1240x930.jpeg 424w, https://substackcdn.com/image/fetch/$s_!VxHl!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8befdc01-a9f0-4106-b8ad-fea55d2317ad_1240x930.jpeg 848w, https://substackcdn.com/image/fetch/$s_!VxHl!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8befdc01-a9f0-4106-b8ad-fea55d2317ad_1240x930.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!VxHl!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8befdc01-a9f0-4106-b8ad-fea55d2317ad_1240x930.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!VxHl!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8befdc01-a9f0-4106-b8ad-fea55d2317ad_1240x930.jpeg" width="1240" height="930" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8befdc01-a9f0-4106-b8ad-fea55d2317ad_1240x930.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:930,&quot;width&quot;:1240,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Where is HBO Silicon Valley's Real Pied Piper? Look in Ayr, Scotland - IEEE  Spectrum&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Where is HBO Silicon Valley's Real Pied Piper? Look in Ayr, Scotland - IEEE  Spectrum" title="Where is HBO Silicon Valley's Real Pied Piper? Look in Ayr, Scotland - IEEE  Spectrum" srcset="https://substackcdn.com/image/fetch/$s_!VxHl!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8befdc01-a9f0-4106-b8ad-fea55d2317ad_1240x930.jpeg 424w, https://substackcdn.com/image/fetch/$s_!VxHl!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8befdc01-a9f0-4106-b8ad-fea55d2317ad_1240x930.jpeg 848w, https://substackcdn.com/image/fetch/$s_!VxHl!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8befdc01-a9f0-4106-b8ad-fea55d2317ad_1240x930.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!VxHl!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8befdc01-a9f0-4106-b8ad-fea55d2317ad_1240x930.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" 
class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">TurboQuant authors from Google Research seen here judging life choices of memory investors.</figcaption></figure></div><p>I wrote up a quick note because a few people on the Substack chat wanted to know what TurboQuant is and what implications it has. I hear the article has made rounds in investor circles.</p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;6e8e83f9-84ea-47c7-a26f-db4bfe154ff7&quot;,&quot;caption&quot;:&quot;Google&#8217;s blog post on a quantization technique called TurboQuant caused a sharp selloff in memory stocks yesterday in the fear that KV-cache usage will drop significantly, and that memory will no longer be a concern. 
The actual method was published nearly a year ago, but Google&#8217;s resurfacing of its prior research is what is causing the jitters.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;TurboQuant: Inner Workings and Implications&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:124411709,&quot;name&quot;:&quot;Vikram Sekar&quot;,&quot;bio&quot;:&quot;EE PhD. 15+ years in semiconductor engineering. I've helped build the technology I write about.&quot;,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!RTM-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5e3e259-005c-4bcf-bffd-1a20cd78aa86_1080x1080.png&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:100}],&quot;post_date&quot;:&quot;2026-03-26T07:43:23.712Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!ToSn!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0c97aef9-a8a2-43a8-9eb3-3b6af92668eb_1250x561.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.viksnewsletter.com/p/turboquant-inner-workings-and-implications&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:192180598,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:13,&quot;comment_count&quot;:2,&quot;publication_id&quot;:2065897,&quot;publication_name&quot;:&quot;Vik's Newsletter&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!9JlA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa409d69d-ca10-4bfe-a1fc-f8d291690566_185x185.png&quot;,&quot;belowTheFold&quot;:false,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><p>The other deep-dive 
from this week is on how grating couplers drive the future of TSMC COUPE for CPO, and why Himax has a fragile moat that will be quickly eroded by standardization of the COUPE-FAU interface.</p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;3180aecf-e188-4cd9-8fe8-72c648468c94&quot;,&quot;caption&quot;:&quot;Earlier this month, Citrini Research and Hunterbrook Media identified Himax $HIMX and FOCI (3363.TT) as potential key suppliers in NVIDIA&#8217;s co-packaged optics (CPO) supply chain - especially as related to TSMC&#8217;s COmpact Universal Photonic Engine (COUPE) that is the de-facto standard for the future of CPO in datacenters. Here, we will describe why Himax and FOCI have a near-term co-design position in COUPE but no long-term moat.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;Grating Coupled COUPE for CPO, and Himax's Fragile FAU Moat&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:124411709,&quot;name&quot;:&quot;Vikram Sekar&quot;,&quot;bio&quot;:&quot;EE PhD. 15+ years in semiconductor engineering. 
I've helped build the technology I write about.&quot;,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!RTM-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5e3e259-005c-4bcf-bffd-1a20cd78aa86_1080x1080.png&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:100}],&quot;post_date&quot;:&quot;2026-03-25T12:17:37.645Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!PVsy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1cd11961-f7b2-41b0-b17a-39e9198dbf3e_620x467.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.viksnewsletter.com/p/grating-coupled-coupe-for-cpo-and-himax-fragile-moat&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:192072506,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:25,&quot;comment_count&quot;:1,&quot;publication_id&quot;:2065897,&quot;publication_name&quot;:&quot;Vik's Newsletter&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!9JlA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa409d69d-ca10-4bfe-a1fc-f8d291690566_185x185.png&quot;,&quot;belowTheFold&quot;:false,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><p>If you missed the last Semi Doped podcast episode, you can check it out below.</p><ul><li><p><a href="https://youtu.be/qD4Lk2Koabo?si=RKg5JkKmpbNxB2Rr">MicroLEDs Ain&#8217;t Dead, Micron Snags Vera-Rubin</a></p></li></ul><p>After this podcast, I had a chat with the guys from <a href="https://avicena.tech/">Avicena</a>, who were kind enough to explain their tech to me in depth. I will write a post once I understand what the industry&#8217;s hesitations are with MicroLED tech. 
Let me know if you have questions.</p><p>Lots of news. Let&#8217;s dive in.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.viksnewsletter.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><em>Weekly news roundups are free, but the deep-dives are where the alpha is at. Get access to 100s of deep dives in the archive about AI semis. Don&#8217;t forget to expense it!</em></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><div><hr></div><h4>Arm Ships Its First Chip Ever: ARM AGI</h4><p>The AGI CPU is the first fully in-house processor Arm has built in its 35-year history, and it heralds a bold entry into server CPUs at a time when agentic AI applications need it most. 
</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!cv_2!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5dad3187-8406-4ef0-a1fd-4009b8b23b3d_696x511.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!cv_2!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5dad3187-8406-4ef0-a1fd-4009b8b23b3d_696x511.jpeg 424w, https://substackcdn.com/image/fetch/$s_!cv_2!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5dad3187-8406-4ef0-a1fd-4009b8b23b3d_696x511.jpeg 848w, https://substackcdn.com/image/fetch/$s_!cv_2!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5dad3187-8406-4ef0-a1fd-4009b8b23b3d_696x511.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!cv_2!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5dad3187-8406-4ef0-a1fd-4009b8b23b3d_696x511.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!cv_2!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5dad3187-8406-4ef0-a1fd-4009b8b23b3d_696x511.jpeg" width="696" height="511" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5dad3187-8406-4ef0-a1fd-4009b8b23b3d_696x511.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:511,&quot;width&quot;:696,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Arm Everywhere Event Arm AGI 
CPU Launch 5&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Arm Everywhere Event Arm AGI CPU Launch 5" title="Arm Everywhere Event Arm AGI CPU Launch 5" srcset="https://substackcdn.com/image/fetch/$s_!cv_2!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5dad3187-8406-4ef0-a1fd-4009b8b23b3d_696x511.jpeg 424w, https://substackcdn.com/image/fetch/$s_!cv_2!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5dad3187-8406-4ef0-a1fd-4009b8b23b3d_696x511.jpeg 848w, https://substackcdn.com/image/fetch/$s_!cv_2!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5dad3187-8406-4ef0-a1fd-4009b8b23b3d_696x511.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!cv_2!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5dad3187-8406-4ef0-a1fd-4009b8b23b3d_696x511.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 
17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Arm Everywhere event, via ServeTheHome</figcaption></figure></div><p>The AGI is built on TSMC 3nm with 136 Neoverse V3 cores and a 300W TDP. Their angle into the CPU market is power efficiency over x86 CPUs, which translates to better TCO when deploying at scale. Their high-density air-cooled rack implementation can house 8,160 cores (60 CPUs), and the liquid-cooled version can hold &gt;45,000 cores (&gt;330 CPUs, &gt;5x more). </p><p>Arm estimates that agentic AI will drive <strong>4x more CPU utilization per deployed GW</strong>. <a href="https://www.viksnewsletter.com/p/beyond-gtc-a-deep-dive-into-compute-lpx?r=222kot">As we saw with Vera Rubin</a>, the CPU:GPU ratio is already 1:1 at the rack scale. Even if Arm is overestimating, that&#8217;s still a lot more CPUs. <a href="https://www.viksnewsletter.com/p/the-cpu-bottleneck-in-agentic-ai?r=222kot">As we predicted in an earlier post</a>, the CPU:GPU ratio is about to get a lot higher than what we have been used to.</p><p>Meta is their lead customer for the AGI CPU, which will possibly run as the head-node CPU for the rapid-fire MTIA chip releases planned in the coming two years. 
</p><p>CEO Rene Haas projected $15B in annual revenue from the chip within five years, with total company revenue reaching $25B and EPS of $9. In a <a href="https://stratechery.com/2026/an-interview-with-arm-ceo-rene-haas-about-selling-chips/">Stratechery interview</a>, Haas said memory (not TSMC) is the real bottleneck, and his comments about the relative importance of core counts are interesting: </p><blockquote><p>I think core count is going to be quite important because I think, again, I have a belief that each one of these cores will want to potentially run their own agent, launch a hypervisor job, launch a job that can be run independently, launch it, get the work done, go to sleep. The performance of the core is going to matter, no doubt about it, but I think the efficiency of that core is probably going to matter just as much as the performance is.</p></blockquote><p>Digitimes <a href="https://www.digitimes.com/news/a20260326PD220/arm-guc-cpu-agi-revenue.html">noted</a> that GUC is likely the first casualty of Arm selling chips directly, given its CPU work with Google and Meta. TechX is wondering when Qualcomm Oryon server CPUs will show up, given that development was put on hold in favor of mobile chips during their lawsuit with Arm.</p><p>(via <a href="https://newsroom.arm.com/news/arm-agi-cpu-launch">ARM</a>)</p><h4>CPU Shortage Gets Real: Intel and AMD Hike Prices</h4><p>Arm&#8217;s CPU provides options at a time when Intel (INTC) and AMD are raising CPU prices again, up 10-15% since the start of the year. Lead times have reportedly ballooned from ~2 weeks to up to 6 months for some SKUs. The question is whether this is a temporary supply hiccup (Intel&#8217;s CFO says Q1 was peak shortage) or a structural tightening as agentic AI workloads drive 4x more CPU demand per GW of data center capacity (per Arm&#8217;s Haas). 
Also, price hikes are a way to keep revenue up when supply is low and demand is high.</p><div class="twitter-embed" data-attrs="{&quot;url&quot;:&quot;https://x.com/jukan05/status/2036707840342270282?s=20&quot;,&quot;full_text&quot;:&quot;CPU shortage becomes reality&#8230; Intel and AMD raise prices again in March, up as much as 15% since the start of the year\n\n&#8226; Intel and AMD raised CPU prices again in March, bringing their cumulative price increases this year to around 10&#8211;15%. At the same time, lead times have&quot;,&quot;username&quot;:&quot;jukan05&quot;,&quot;name&quot;:&quot;Jukan&quot;,&quot;profile_image_url&quot;:&quot;https://pbs.substack.com/profile_images/2036967296858808323/UtRDDxV1_normal.jpg&quot;,&quot;date&quot;:&quot;2026-03-25T07:32:43.000Z&quot;,&quot;photos&quot;:[],&quot;quoted_tweet&quot;:{},&quot;reply_count&quot;:19,&quot;retweet_count&quot;:74,&quot;like_count&quot;:634,&quot;impression_count&quot;:135225,&quot;expanded_url&quot;:null,&quot;video_url&quot;:null,&quot;belowTheFold&quot;:true}" data-component-name="Twitter2ToDOM"></div><h4>TSEM + COHR 400G/lane SiPho Using Si Modulators</h4><p>Tower Semiconductor and Coherent jointly demonstrated 400 Gbps/lane data transmission using a silicon modulator built on Tower&#8217;s production-ready silicon photonics (SiPho) process, achieving a clear open eye at 420 Gb/s PAM4. Apparently, this was demonstrated in OFC 2026 which I could not go to, but would love to see the slides/paper if anyone from Tower or Coherent reading this wants to share. </p><p>If we look back at Broadcom&#8217;s recent <a href="https://www.viksnewsletter.com/i/190636787/avgo-broadcom-delivers-industrys-first-400glane-optical-dsp-for-next-generation-ai-networks">400G optical DSP announcement</a>, there was a mention of thin-film lithium niobate (TFLN) used for the design of high bandwidth modulators required for the 3.2T generation of optical transceivers. 
If silicon modulators, whether micro-ring or Mach-Zehnder (not sure which one was used), can work at 3.2T (400G &#215;8), then there is no need for special TFLN deposition steps in the wafer processing flow for several years to come. Since Tower is investing $920M in capex towards SiPho/SiGe capability, using existing infrastructure means high fab utilization rates. The modulators used Coherent&#8217;s InP CW laser, which needs narrow linewidth, low relative intensity noise (RIN) and a high side-mode suppression ratio (SMSR) to work. Apparently, it&#8217;s good enough for 400G/lane.</p><p>(via <a href="https://ir.towersemi.com/news-releases/news-release-details/tower-semiconductor-and-coherent-demonstrate-400gbpslane-data">Tower Semiconductor</a>)</p><h4>&#8230; Then GF Sues Tower</h4><p>GlobalFoundries (GFS) filed patent suits against Tower Semi alleging that (emphasis mine):</p><blockquote><p>&#8230; [Tower] has infringed GF patents <strong>by freeriding on decades of GF innovation</strong> with an intent to unlawfully take business away from the American chipmaker.</p></blockquote><p>The &#8216;freeriding&#8217; refers to 11 patents in areas related to smart mobile, automotive, aerospace and communications infrastructure. Yeah, so basically everything &#128518; GF seeks to ban the import of Tower chips into the US, and to be compensated for lost profits. Tower intends to put a stake in the ground and fight.</p><p>Tower shares closed down 7.5% and GlobalFoundries shares fell 4.6%. If you want to read too much into it, this infringement lawsuit comes after <a href="https://towersemi.com/2026/02/05/02052026/#:~:text=Tower%20Semiconductor%20focuses%20on%20creating,by%20such%20forward%2Dlooking%20statements.">Tower&#8217;s deal with Nvidia</a> to supply 1.6T optical transceivers. Lawsuits are bad for everybody, except lawyers &#128305;. Austin and I chatted about it in the upcoming Semi Doped episode. 
Stay tuned.</p><p>(via <a href="https://gf.com/gf-press-release/globalfoundries-files-patent-infringement-lawsuits-against-tower-semiconductor-to-protect-high-performance-american-chip-innovation/">GlobalFoundries</a>)</p><h4>AAOI Lands Back-to-Back Hyperscaler Orders</h4><p>Applied Optoelectronics (AAOI) received a $53M order for 800G single-mode transceivers from a major hyperscaler (likely AWS), with shipments starting Q2 and completing mid-Q3. This follows a $200M+ 1.6T order from the same customer the prior week. The stock jumped 20% on the news. The back-to-back orders across both 800G and 1.6T generations position AAOI as a meaningful supplier to at least one hyperscaler&#8217;s GPU cluster buildout, not just a niche player anymore.</p><p>(via <a href="https://newsroom.ao-inc.com/news-releases/aoi-receives-new-order-for-800g-data-center-transceivers-from-major-hyperscale-customer/">Applied Optoelectronics</a>)</p><h4>Micron Crushes, Then Gets Crushed by TurboQuant</h4><p>Micron (MU) posted a blowout FQ2: $8.05B revenue, record 75% gross margin, HBM sold out through end of 2026, and FQ3 guidance of $8.8B that blew past the $7.95B buy-side expectation. All good right? 
No.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!MUtO!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5ae1c5c-995e-4243-9a69-49c2dfc8d583_570x539.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!MUtO!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5ae1c5c-995e-4243-9a69-49c2dfc8d583_570x539.png 424w, https://substackcdn.com/image/fetch/$s_!MUtO!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5ae1c5c-995e-4243-9a69-49c2dfc8d583_570x539.png 848w, https://substackcdn.com/image/fetch/$s_!MUtO!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5ae1c5c-995e-4243-9a69-49c2dfc8d583_570x539.png 1272w, https://substackcdn.com/image/fetch/$s_!MUtO!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5ae1c5c-995e-4243-9a69-49c2dfc8d583_570x539.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!MUtO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5ae1c5c-995e-4243-9a69-49c2dfc8d583_570x539.png" width="570" height="539" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e5ae1c5c-995e-4243-9a69-49c2dfc8d583_570x539.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:539,&quot;width&quot;:570,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:59510,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.viksnewsletter.com/i/191880853?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5ae1c5c-995e-4243-9a69-49c2dfc8d583_570x539.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!MUtO!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5ae1c5c-995e-4243-9a69-49c2dfc8d583_570x539.png 424w, https://substackcdn.com/image/fetch/$s_!MUtO!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5ae1c5c-995e-4243-9a69-49c2dfc8d583_570x539.png 848w, https://substackcdn.com/image/fetch/$s_!MUtO!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5ae1c5c-995e-4243-9a69-49c2dfc8d583_570x539.png 1272w, https://substackcdn.com/image/fetch/$s_!MUtO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5ae1c5c-995e-4243-9a69-49c2dfc8d583_570x539.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" 
viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Google happened to resurface a year-old KV cache compression paper called TurboQuant and the stock cratered, down 25% from its highs by midweek with Sandisk dragged down 8% in sympathy. This is an overreaction to a year old paper, and nothing fundamentally changes the demand for memory and storage. More coverage in linked article above.</p><p>(via <a href="https://www.cnbc.com/2026/03/26/google-ai-turboquant-memory-chip-stocks-samsung-micron.html">CNBC</a>)</p><h4>Helium Spot Price Surges 50%+ on Qatar Attacks</h4><p>It&#8217;s not often we discuss raw materials in this newsletter, but this one is important. Iranian attacks on Qatar&#8217;s Ras Laffan LNG complex knocked out roughly 17% of Qatar&#8217;s LNG export capacity, triggering a 50%+ spike in helium spot prices. 
Also, remember that helium is a finite resource on Earth: once wasted, it is gone forever, because it rises through the atmosphere and escapes into space. Why does this matter beyond birthday balloons?</p><p>Helium is critical to semiconductor manufacturing, used in implantation, lithography cooling, and leak detection. Its inertness makes it well suited to purity testing, and its small atomic size makes it ideal for leak testing: helium slips through openings that block other gases. It&#8217;s not a major cost component, but a supply disruption is still a concern. Samsung and SK Hynix are most exposed (sourcing 64.7% of their helium from Qatar versus TSMC&#8217;s 30%) and are stockpiling at elevated prices. Helium is recycled after use, but inventories still need replenishing.</p><p>The crisis is compounding broader materials cost pressure: photolithography solvents are up 40-50%, and suppliers including DuPont and LG Chem have issued price-hike notices for April. The main issue is the medium- to long-term helium supply, because disruptions often take years to recover from. 
</p><p>(via <a href="https://www.trendforce.com/news/2026/03/27/news-iran-conflict-reportedly-drives-50-helium-spot-price-surge-samsung-sk-hynix-on-high-alert/">Trendforce</a>)</p><div><hr></div><p>Have a great weekend!</p>]]></content:encoded></item><item><title><![CDATA[TurboQuant: Inner Workings and Implications]]></title><description><![CDATA[Google's year-old research resurfaced in a recent blog post and spooked memory investors.]]></description><link>https://www.viksnewsletter.com/p/turboquant-inner-workings-and-implications</link><guid isPermaLink="false">https://www.viksnewsletter.com/p/turboquant-inner-workings-and-implications</guid><dc:creator><![CDATA[Vikram Sekar]]></dc:creator><pubDate>Thu, 26 Mar 2026 07:43:23 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!ToSn!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0c97aef9-a8a2-43a8-9eb3-3b6af92668eb_1250x561.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Google&#8217;s blog post on a quantization technique called <a href="https://research.google/blog/turboquant-redefining-ai-efficiency-with-extreme-compression/">TurboQuant</a> caused a sharp selloff in memory stocks yesterday on fears that KV-cache usage will drop significantly and that memory will no longer be a constraint. The actual method was published nearly a year ago; Google&#8217;s resurfacing of its prior research is what caused the jitters.</p><p>This is not the first time that efficiency improvements or niche use cases have triggered a selloff in the memory sector. The announcement of SRAM accelerators, for example, worried investors that HBM would no longer be useful. Investors in memory have seen cyclical booms and busts, and their trigger-happiness is not without reason. Given the run-up in memory stocks, everyone is looking for the top.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.viksnewsletter.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.viksnewsletter.com/subscribe?"><span>Subscribe now</span></a></p><p>A few people have asked me to explore TurboQuant in a bit more depth. TurboQuant is a two-stage algorithm, combining PolarQuant and QJL, that lets the KV cache physically occupy fewer bits and thus allows for longer context windows in LLMs.</p><p>In this short note, we&#8217;ll go through TurboQuant briefly and tie it to what it means for the industry.</p>
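<p>To put the memory argument in perspective, here is a back-of-the-envelope sketch of how KV-cache size scales with bits per value. The model dimensions below are illustrative assumptions for a generic 70B-class LLM, not figures from the TurboQuant paper.</p>

```python
# Back-of-the-envelope KV-cache sizing. Model dimensions are
# illustrative assumptions, not taken from the TurboQuant paper.

def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, bits_per_value):
    # Factor of 2 accounts for storing both keys and values.
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bits_per_value / 8

# Hypothetical 70B-class model with grouped-query attention.
args = dict(n_layers=80, n_kv_heads=8, head_dim=128, seq_len=128_000)

fp16 = kv_cache_bytes(**args, bits_per_value=16)
q2 = kv_cache_bytes(**args, bits_per_value=2)  # extreme quantization

print(f"FP16 KV cache:  {fp16 / 2**30:.1f} GiB")  # ~39 GiB per sequence
print(f"2-bit KV cache: {q2 / 2**30:.1f} GiB")    # ~4.9 GiB per sequence
```

<p>The savings are linear in bit width, which is why an 8x reduction tends to read as &#8220;longer context in the same HBM&#8221; rather than &#8220;less HBM demand&#8221; &#8211; operators tend to spend the headroom.</p>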
      <p>
          <a href="https://www.viksnewsletter.com/p/turboquant-inner-workings-and-implications">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Grating Coupled COUPE for CPO, and Himax's Fragile FAU Moat]]></title><description><![CDATA[An engineering deep dive into TSMC's COUPE optical engine, explaining why the co-design position of Himax makes them critical near-term suppliers for NVIDIA&#8217;s CPO, but without a long-term moat.]]></description><link>https://www.viksnewsletter.com/p/grating-coupled-coupe-for-cpo-and-himax-fragile-moat</link><guid isPermaLink="false">https://www.viksnewsletter.com/p/grating-coupled-coupe-for-cpo-and-himax-fragile-moat</guid><dc:creator><![CDATA[Vikram Sekar]]></dc:creator><pubDate>Wed, 25 Mar 2026 12:17:37 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!PVsy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1cd11961-f7b2-41b0-b17a-39e9198dbf3e_620x467.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Earlier this month, <a href="https://www.citriniresearch.com/p/let-there-be-light">Citrini Research</a> and <a href="https://hntrbrk.com/himax/">Hunterbrook Media</a> identified Himax $HIMX and FOCI (3363.TT) as potential key suppliers in NVIDIA&#8217;s co-packaged optics (CPO) supply chain &#8211; especially in relation to TSMC&#8217;s COmpact Universal Photonic Engine (COUPE), the de-facto standard for the future of CPO in datacenters. Here, we will describe why Himax and FOCI have a near-term co-design position in COUPE but no long-term moat.</p><p>Citrini/Hunterbrook traced a trail through four patents (two Himax, one FOCI, one TSMC) and found that both Himax&#8217;s manufacturing patent and TSMC&#8217;s Fiber Array Unit (FAU) inspection patent describe a 22-channel configuration (standard fiber counts in optics are 12, 16, 24, and 32). The unusual but matching channel count across patents was considered the &#8220;smoking gun&#8221; tying Himax and FOCI to TSMC as key suppliers. 
<a href="https://photoncap.substack.com/p/citrinihunterbrook-himx-report-analysis">PhotonCap</a> has a patent-by-patent analysis that is worth reading for the full forensic detail. However, <a href="https://irrationalanalysis.substack.com/p/citrini-3122026-optics-basket-comments">Irrational Analysis</a> counters Citrini&#8217;s patent-centric approach stating:</p><blockquote><p>There is zero discussion on coupling efficiency, polarization extinction ratio, or alignment tolerance.</p><p>The most dangerous analysis looks smart (check out these patents!) but actually delivers zero insight into the underlying viability of the tech.</p><p>These micro-lense assemblies are obviously for grating coupler attach. Grating couplers have high loss compared to edge couplers so the lenses better be damn good. Zero evidence this is the case.</p></blockquote><p>In this article, we will look at the engineering details that are missing in the Himax/FOCI/TSMC conversation. We will cover:</p><ul><li><p>Physics of edge vs grating couplers (free)</p></li><li><p>Implementation details of TSMC COUPE that makes it low-loss and well suited to CPO</p></li><li><p>Need for fiber array unit (FAU) co-design between TSMC and Himax/FOCI</p></li><li><p>Is Himax actually a good bet? How solid is their moat?</p></li></ul><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.viksnewsletter.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><em>Subscribe for weekly updates from the frontlines of AI semis. Consider upgrading to a paid tier for access to 100s of deep-dives like this one. 
You can expense it too!</em></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>If you are not a paid subscriber, you can purchase just this article using the button below. You can find the whole catalog of articles for purchase <a href="https://www.semiexponent.com/products">at this link</a>.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.semiexponent.com/grating-coupled-coupe-for-cpo-and-himax-s-fragile-fau-moat&quot;,&quot;text&quot;:&quot;Buy the ebook&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.semiexponent.com/grating-coupled-coupe-for-cpo-and-himax-s-fragile-fau-moat"><span>Buy the ebook</span></a></p><div><hr></div><h3>Physics of Edge vs Grating Couplers</h3><p>Much of the discussion to follow requires an understanding of how light is coupled from a fiber into a chip, and how the entire flow of light works. We will cover this in some detail because it is necessary to understand what Himax+FOCI brings to the table, and how critical their technology is to the CPO supply chain.</p><p>There are two ways to get light on or off a photonic chip: edge and grating couplers.</p><h4>Edge couplers</h4><p>Edge coupling is the method of coupling light into the side of the chip. 
This is conceptually simple because the Photonic IC (PIC) has silicon nitride (SiN) waveguides into which light can be coupled from an optical fiber by narrowing the beam with a microlens, and using a waveguide taper where the fiber connects.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!j9at!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F433b2989-40ea-4d1f-b1db-d4851de5987b_1336x754.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!j9at!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F433b2989-40ea-4d1f-b1db-d4851de5987b_1336x754.png 424w, https://substackcdn.com/image/fetch/$s_!j9at!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F433b2989-40ea-4d1f-b1db-d4851de5987b_1336x754.png 848w, https://substackcdn.com/image/fetch/$s_!j9at!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F433b2989-40ea-4d1f-b1db-d4851de5987b_1336x754.png 1272w, https://substackcdn.com/image/fetch/$s_!j9at!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F433b2989-40ea-4d1f-b1db-d4851de5987b_1336x754.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!j9at!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F433b2989-40ea-4d1f-b1db-d4851de5987b_1336x754.png" width="594" height="335.2365269461078" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/433b2989-40ea-4d1f-b1db-d4851de5987b_1336x754.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:754,&quot;width&quot;:1336,&quot;resizeWidth&quot;:594,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!j9at!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F433b2989-40ea-4d1f-b1db-d4851de5987b_1336x754.png 424w, https://substackcdn.com/image/fetch/$s_!j9at!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F433b2989-40ea-4d1f-b1db-d4851de5987b_1336x754.png 848w, https://substackcdn.com/image/fetch/$s_!j9at!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F433b2989-40ea-4d1f-b1db-d4851de5987b_1336x754.png 1272w, https://substackcdn.com/image/fetch/$s_!j9at!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F433b2989-40ea-4d1f-b1db-d4851de5987b_1336x754.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path 
d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Since this only involves the light moving from one medium to another without really changing directions, most of the light energy couples to the PIC and loss is low. This is an oversimplification, but is sufficient here. 
Additionally, both wave-guiding media &#8211; optical fiber and waveguides on the PIC &#8211; support both transverse electric (TE) and transverse magnetic (TM) polarizations, which are wave oscillations of the electromagnetic field as light propagates.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!35kW!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59d6a29f-0eba-4a60-815c-bbbe22dd0678_289x209.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!35kW!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59d6a29f-0eba-4a60-815c-bbbe22dd0678_289x209.png 424w, https://substackcdn.com/image/fetch/$s_!35kW!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59d6a29f-0eba-4a60-815c-bbbe22dd0678_289x209.png 848w, https://substackcdn.com/image/fetch/$s_!35kW!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59d6a29f-0eba-4a60-815c-bbbe22dd0678_289x209.png 1272w, https://substackcdn.com/image/fetch/$s_!35kW!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59d6a29f-0eba-4a60-815c-bbbe22dd0678_289x209.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!35kW!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59d6a29f-0eba-4a60-815c-bbbe22dd0678_289x209.png" width="305" height="220.57093425605535" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/59d6a29f-0eba-4a60-815c-bbbe22dd0678_289x209.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:209,&quot;width&quot;:289,&quot;resizeWidth&quot;:305,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!35kW!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59d6a29f-0eba-4a60-815c-bbbe22dd0678_289x209.png 424w, https://substackcdn.com/image/fetch/$s_!35kW!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59d6a29f-0eba-4a60-815c-bbbe22dd0678_289x209.png 848w, https://substackcdn.com/image/fetch/$s_!35kW!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59d6a29f-0eba-4a60-815c-bbbe22dd0678_289x209.png 1272w, https://substackcdn.com/image/fetch/$s_!35kW!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59d6a29f-0eba-4a60-815c-bbbe22dd0678_289x209.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a><figcaption class="image-caption">Source: ntt-review.jp</figcaption></figure></div><p>If the transition from optical fiber to waveguide attenuates either the TE or the TM mode, the coupler is said to have &#8220;polarization-dependent loss&#8221; (PDL). An edge coupler usually passes both TE and TM modes, with low coupling loss and low PDL. But there are several downsides:</p><ol><li><p><strong>Limited beach-front density</strong>: Using only the edges of the PIC to couple light limits how many fibers can be placed around the periphery. This limits the number of ports in a CPO switch.</p></li><li><p><strong>High alignment requirements</strong>: The optical fiber must be aligned to the waveguide taper with high precision (&lt;1 micron) to retain the low-loss benefits.</p></li><li><p><strong>Singulation before testing</strong>: Each PIC must be diced from the wafer and tested by coupling from the edge. This makes testing low-throughput and expensive.</p></li></ol><p>Broadcom&#8217;s Tomahawk 4 (Humboldt) and 5 (Bailly) CPO switch chips are examples that use edge coupling to attach fiber to the chips. Below is a quick survey of recent edge couplers in C- and O-bands. The TE0 and TM0 modes both have very low loss, and the PDL is near zero, which implies that both modes couple well.</p><div id="datawrapper-iframe" class="datawrapper-wrap outer" data-attrs="{&quot;url&quot;:&quot;https://datawrapper.dwcdn.net/ULb9q/1/&quot;,&quot;thumbnail_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/97611cd7-07f0-407d-9e12-02049b2a0751_1220x1118.png&quot;,&quot;thumbnail_url_full&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/77111c2c-a9fd-4c65-970a-a3460c8c5f42_1220x1242.png&quot;,&quot;height&quot;:644,&quot;title&quot;:&quot;Edge coupler designs&quot;,&quot;description&quot;:&quot;A survey of recent edge coupler designs reported.&quot;}" data-component-name="DatawrapperToDOM"><iframe id="iframe-datawrapper" class="datawrapper-iframe" src="https://datawrapper.dwcdn.net/ULb9q/1/" width="730" height="644" frameborder="0" scrolling="no"></iframe><script type="text/javascript">!function(){"use strict";window.addEventListener("message",(function(e){if(void 0!==e.data["datawrapper-height"]){var t=document.querySelectorAll("iframe");for(var a in 
e.data["datawrapper-height"])for(var r=0;r<t.length;r++){if(t[r].contentWindow===e.source)t[r].style.height=e.data["datawrapper-height"][a]+"px"}}}))}();</script></div><h4>Grating Couplers</h4><p>A grating coupler allows light to enter or exit the PIC from the top surface rather than the edge. It is essentially a series of periodic etchings (grooves) on the surface of a waveguide. When light traveling through the waveguide hits these grooves, it scatters. By precisely spacing these grooves, the scattered light undergoes <strong>constructive interference</strong> at a specific angle, directed upward out of the chip. Conversely, light coming from a fiber at that same angle will interfere constructively to &#8220;drop&#8221; into the waveguide.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Pz0s!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffaf048a0-1db2-4e69-985e-71a31130f138_1135x295.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Pz0s!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffaf048a0-1db2-4e69-985e-71a31130f138_1135x295.png 424w, https://substackcdn.com/image/fetch/$s_!Pz0s!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffaf048a0-1db2-4e69-985e-71a31130f138_1135x295.png 848w, https://substackcdn.com/image/fetch/$s_!Pz0s!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffaf048a0-1db2-4e69-985e-71a31130f138_1135x295.png 1272w, 
https://substackcdn.com/image/fetch/$s_!Pz0s!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffaf048a0-1db2-4e69-985e-71a31130f138_1135x295.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Pz0s!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffaf048a0-1db2-4e69-985e-71a31130f138_1135x295.png" width="1135" height="295" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/faf048a0-1db2-4e69-985e-71a31130f138_1135x295.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:295,&quot;width&quot;:1135,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Pz0s!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffaf048a0-1db2-4e69-985e-71a31130f138_1135x295.png 424w, https://substackcdn.com/image/fetch/$s_!Pz0s!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffaf048a0-1db2-4e69-985e-71a31130f138_1135x295.png 848w, https://substackcdn.com/image/fetch/$s_!Pz0s!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffaf048a0-1db2-4e69-985e-71a31130f138_1135x295.png 1272w, 
https://substackcdn.com/image/fetch/$s_!Pz0s!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffaf048a0-1db2-4e69-985e-71a31130f138_1135x295.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Light traveling in a PIC waveguide hits grooves in silicon, and scatters upward. Source: EMopt.</figcaption></figure></div><p>Here is a simulated side and top view of the field that transitions from a waveguide to scatter vertically. It is important to note that scattering happens in both directions. 
Even though one direction dominates, the light scattered in the unwanted direction is lost and adds to the overall coupling loss. We will revisit this point later in the TSMC COUPE discussion.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!C5Ru!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0be930b6-9471-44dc-80d0-14001ed265ac_784x388.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!C5Ru!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0be930b6-9471-44dc-80d0-14001ed265ac_784x388.png 424w, https://substackcdn.com/image/fetch/$s_!C5Ru!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0be930b6-9471-44dc-80d0-14001ed265ac_784x388.png 848w, https://substackcdn.com/image/fetch/$s_!C5Ru!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0be930b6-9471-44dc-80d0-14001ed265ac_784x388.png 1272w, https://substackcdn.com/image/fetch/$s_!C5Ru!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0be930b6-9471-44dc-80d0-14001ed265ac_784x388.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!C5Ru!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0be930b6-9471-44dc-80d0-14001ed265ac_784x388.png" width="784" height="388" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0be930b6-9471-44dc-80d0-14001ed265ac_784x388.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:388,&quot;width&quot;:784,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!C5Ru!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0be930b6-9471-44dc-80d0-14001ed265ac_784x388.png 424w, https://substackcdn.com/image/fetch/$s_!C5Ru!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0be930b6-9471-44dc-80d0-14001ed265ac_784x388.png 848w, https://substackcdn.com/image/fetch/$s_!C5Ru!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0be930b6-9471-44dc-80d0-14001ed265ac_784x388.png 1272w, https://substackcdn.com/image/fetch/$s_!C5Ru!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0be930b6-9471-44dc-80d0-14001ed265ac_784x388.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path 
d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Electric field shows light coming in and making a right angle out of the top of the chip. 
Source: DOI:10.1109/LPT.2022.3221527</figcaption></figure></div><p>Grating couplers have some immediate advantages:</p><ul><li><p><strong>Beach-front area</strong>: The entire 2D surface of the chip is available for fiber attach, which greatly increases the density of fibers that can be connected to a chip compared to using the edges alone.</p></li><li><p><strong>Relaxed alignment requirements</strong>: Grating couplers tolerate significantly more fiber misalignment than edge couplers, typically 2-4 microns either way from center.</p></li><li><p><strong>Wafer-level testing</strong>: Since light exits the top of the chip, PICs can be tested before dicing, greatly simplifying the process and increasing throughput.</p></li></ul><p>In spite of these advantages, there are downsides:</p><ul><li><p><strong>Higher loss</strong>: Grating couplers are inherently lossier because of the scattering in the grating structure; much research has gone into minimizing this.</p></li><li><p><strong>High PDL</strong>: A grating structure only couples one polarization well, while the other suffers high loss.</p></li></ul><p>The higher loss of gratings shrinks the link budget as data rates increase, so everything downstream &#8211; the microlenses, prisms, and FAUs &#8211; must be optimized for the lowest possible loss. High PDL means the data rate cannot easily be doubled by putting information on each polarization (polarization multiplexing), which becomes a limiting factor as data rates scale. 
Here is a recent survey of grating couplers if you&#8217;re interested.</p><div id="datawrapper-iframe" class="datawrapper-wrap outer" data-attrs="{&quot;url&quot;:&quot;https://datawrapper.dwcdn.net/N6Rl1/2/&quot;,&quot;thumbnail_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6e11c860-05c2-4be2-b22f-8a61eeb1ee5e_1220x1056.png&quot;,&quot;thumbnail_url_full&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3bdb61d2-027f-46e9-88af-d7be0c2172e4_1220x1180.png&quot;,&quot;height&quot;:595,&quot;title&quot;:&quot;Grating coupler designs&quot;,&quot;description&quot;:&quot;Recently reported grating coupler designs in literature.&quot;}" data-component-name="DatawrapperToDOM"><iframe id="iframe-datawrapper" class="datawrapper-iframe" src="https://datawrapper.dwcdn.net/N6Rl1/2/" width="730" height="595" frameborder="0" scrolling="no"></iframe><script type="text/javascript">!function(){"use strict";window.addEventListener("message",(function(e){if(void 0!==e.data["datawrapper-height"]){var t=document.querySelectorAll("iframe");for(var a in e.data["datawrapper-height"])for(var r=0;r<t.length;r++){if(t[r].contentWindow===e.source)t[r].style.height=e.data["datawrapper-height"][a]+"px"}}}))}();</script></div><p>After the paywall, we discuss:</p><ul><li><p>The engineering behind TSMC COUPE, including the implementation details that make it low-loss and well suited to CPO.</p></li><li><p>Why the fiber array unit (FAU) needs to be a co-design between TSMC and Himax/FOCI</p></li><li><p>Discussion on whether Himax is actually a good bet, and how solid their moat is.</p></li></ul>
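The link-budget arithmetic behind this is simple enough to sketch. Below is a minimal Python illustration of how per-interface losses stack up against laser power and receiver sensitivity; every number in it (laser power, individual loss values, receiver sensitivity) is an illustrative assumption, not a measured value for any product mentioned here.

```python
# Illustrative optical link budget. All dB/dBm values below are
# assumptions for the sake of example, not measured specs.

def link_margin_db(laser_dbm, losses_db, rx_sensitivity_dbm):
    """Received power minus receiver sensitivity, in dB."""
    received_dbm = laser_dbm - sum(losses_db.values())
    return received_dbm - rx_sensitivity_dbm

losses = {
    "tx_grating_coupler": 2.0,  # fiber-to-chip coupling, assumed
    "on_chip_routing":    1.0,  # waveguide + component loss, assumed
    "connectors_fau":     1.0,  # FAU / connector interfaces, assumed
    "fiber_span":         0.5,  # short intra-rack fiber, assumed
    "rx_grating_coupler": 2.0,  # chip-to-fiber on the receive side, assumed
}

# Assumed +3 dBm laser and -8 dBm receiver sensitivity at the target rate.
margin = link_margin_db(3.0, losses, -8.0)
print(f"Link margin: {margin:.1f} dB")  # prints: Link margin: 4.5 dB
```

Shaving even a fraction of a dB off each interface adds up across the five loss contributors, which is why every element in the optical path gets co-optimized.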
      <p>
          <a href="https://www.viksnewsletter.com/p/grating-coupled-coupe-for-cpo-and-himax-fragile-moat">
              Read more
          </a>
      </p>
]]></content:encoded></item><item><title><![CDATA[🍪 TWiC: MU HBM4, OCI MSA, Credo Optics, VCSEL Scale-Up, DGX GB300, NBIS $27B]]></title><description><![CDATA[Hot takes from the AI oven.]]></description><link>https://www.viksnewsletter.com/p/twic-mu-hbm4-oci-msa-credo-optics</link><guid isPermaLink="false">https://www.viksnewsletter.com/p/twic-mu-hbm4-oci-msa-credo-optics</guid><dc:creator><![CDATA[Vikram Sekar]]></dc:creator><pubDate>Fri, 20 Mar 2026 10:39:22 GMT</pubDate><enclosure url="https://substackcdn.com/image/upload/w_1028,c_limit,q_auto:best/sdblyqjgytbwbw6dwowq" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Way too much news this week due to simultaneous GTC and OFC events. We&#8217;ll only hit a few topics even though there is a lot more out there. We covered GTC already with both a deep dive and a podcast, so we won&#8217;t dwell on it today.</p><ul><li><p><strong>Deep dive</strong>: <a href="https://www.viksnewsletter.com/p/beyond-gtc-a-deep-dive-into-compute-lpx">Beyond GTC: A Deep Dive into Compute, LPX, and the Untold Story of SpecDec</a></p></li><li><p><strong>Semi Doped Podcast</strong>: <a href="https://youtu.be/UfaIU6h0YdY?si=1s2KorILuXrkz_Nx">Quick Takes: Nvidia GTC keynote</a></p></li></ul><p>We genuinely feel this was one of our most spontaneous and fun episodes on Semi Doped. Highly recommend you check it out. We have crossed 1,100 subs on YouTube and 7,500 podcast downloads already!</p><p>Now on to the news.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.viksnewsletter.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><em>If you like the weekly roundups, you&#8217;ll enjoy the deep dives for paid subscribers.
Highly researched, deep technical content, translated into what it means for your investments.</em></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><div><hr></div><h4>Micron supplying &#8220;fastest&#8221; HBM4 to Nvidia Vera Rubin</h4><p>A SemiAnalysis report published earlier this year declared that Micron would be supplying <em>zero</em> HBM4 to Nvidia in the early Rubin ramp. But those claims have largely been quelled now, with Jensen publicly touting their &#8220;fastest HBM4&#8221; delivered. <a href="https://enertuition.substack.com/p/micron-out-of-initial-nvidia-rubin">Several</a> <a href="https://x.com/FundaAI/status/2019787601272824091?s=20">folks</a> pushed back on this at the time, and reiterated that selling <span class="cashtag-wrap" data-attrs="{&quot;symbol&quot;:&quot;$MU&quot;}" data-component-name="CashtagToDOM"></span> on this rumor was a bad idea. They argued that even if Micron did not sell HBM to Nvidia, they would be better off selling much-wanted DRAM at elevated prices instead, since DRAM is significantly easier to manufacture. There were also rumors earlier that the HBM4 base die being designed on a memory node (1-beta) somehow slowed down lane speeds compared to competitors who used logic nodes in the base die. This was later compounded by rumors that Micron somehow had trouble measuring HBM performance. </p><p><strong>Well, all that guessing seems questionable now</strong>. Micron posted great numbers last quarter, stock is up 62% YTD, and the line still goes out the door for memory while they supply HBM4 to Vera Rubin.
I am still skeptical that Micron&#8217;s HBM is actually the &#8220;fastest,&#8221; but it seems fast enough that Jensen is willing to take a picture with Sanjay Mehrotra, CEO of Micron. Also, Jensen, some people would have wanted that memory before you signed on the wafer. Think about where you go about signing stuff; just sayin&#8217;.</p><div class="twitter-embed" data-attrs="{&quot;url&quot;:&quot;https://x.com/theaustinlyons/status/2033705952676036792?s=20&quot;,&quot;full_text&quot;:&quot;JENSEN TO MICRON CEO: To my friends at Micron. Our partnership changed the world. Industry&#8217;s fastest HBM4 delivered to NVIDIA.\n\nSeems like Micron HBM4 is a go. \n$MU $NVDA <span class=\&quot;tweet-fake-link\&quot;>@MicronTech</span> <span class=\&quot;tweet-fake-link\&quot;>@NVIDIAGTC</span> &quot;,&quot;username&quot;:&quot;theaustinlyons&quot;,&quot;name&quot;:&quot;Austin Lyons&quot;,&quot;profile_image_url&quot;:&quot;https://pbs.substack.com/profile_images/1972631213124075520/ZUcO_UIY_normal.jpg&quot;,&quot;date&quot;:&quot;2026-03-17T00:44:18.000Z&quot;,&quot;photos&quot;:[{&quot;img_url&quot;:&quot;https://pbs.substack.com/media/HDkswtYbEAMY0Cc.jpg&quot;,&quot;link_url&quot;:&quot;https://t.co/WClrH3vslA&quot;}],&quot;quoted_tweet&quot;:{},&quot;reply_count&quot;:7,&quot;retweet_count&quot;:34,&quot;like_count&quot;:294,&quot;impression_count&quot;:58236,&quot;expanded_url&quot;:null,&quot;video_url&quot;:null,&quot;belowTheFold&quot;:true}" data-component-name="Twitter2ToDOM"></div><p>(via <a href="https://investors.micron.com/news-releases/news-release-details/micron-high-volume-production-hbm4-designed-nvidia-vera-rubin">Micron</a>)</p><div><hr></div><h4>Optical scale-up goes official with OCI MSA</h4><p>A bunch of important companies got together and decided to put some method to the madness in optical networking.
The Optical Compute Interconnect Multi-Source Agreement (<a href="https://oci-msa.org/">OCI MSA</a>) is a way to make sure everyone is talking the same language when we go to optical scale up. This allows infrastructure buildouts to use components from different vendors who all comply to the open standard, thus fostering a robust ecosystem of optical networking and avoiding getting everyone locked in to how any one company does things. This is an important step forward, because it makes optical scale-up official, and now everyone is building towards it. In a similar vein, Lightmatter announced <a href="https://lightmatter.co/press-release/lightmatter-announces-reference-architecture-initiative-with-industry-leaders-in-the-open-compute-project-for-co-packaged-optics/">a new collaborative initiative</a> within the Open Compute Project (OCP) to create open specifications for interoperable CPO.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!QWYR!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36d78429-6c55-4d79-b4d6-4cdf279b81c0_1146x279.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!QWYR!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36d78429-6c55-4d79-b4d6-4cdf279b81c0_1146x279.png 424w, https://substackcdn.com/image/fetch/$s_!QWYR!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36d78429-6c55-4d79-b4d6-4cdf279b81c0_1146x279.png 848w, https://substackcdn.com/image/fetch/$s_!QWYR!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36d78429-6c55-4d79-b4d6-4cdf279b81c0_1146x279.png 1272w, 
https://substackcdn.com/image/fetch/$s_!QWYR!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36d78429-6c55-4d79-b4d6-4cdf279b81c0_1146x279.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!QWYR!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36d78429-6c55-4d79-b4d6-4cdf279b81c0_1146x279.png" width="1146" height="279" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/36d78429-6c55-4d79-b4d6-4cdf279b81c0_1146x279.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:279,&quot;width&quot;:1146,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:47463,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.viksnewsletter.com/i/191432991?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36d78429-6c55-4d79-b4d6-4cdf279b81c0_1146x279.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!QWYR!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36d78429-6c55-4d79-b4d6-4cdf279b81c0_1146x279.png 424w, https://substackcdn.com/image/fetch/$s_!QWYR!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36d78429-6c55-4d79-b4d6-4cdf279b81c0_1146x279.png 848w, 
https://substackcdn.com/image/fetch/$s_!QWYR!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36d78429-6c55-4d79-b4d6-4cdf279b81c0_1146x279.png 1272w, https://substackcdn.com/image/fetch/$s_!QWYR!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36d78429-6c55-4d79-b4d6-4cdf279b81c0_1146x279.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>(via <a href="https://oci-msa.org/">oci-msa.org</a>)</p><div><hr></div><h4>Credo: 400G/800G/1.6T Optical DSPs, ZeroFlap 800G transceivers</h4><p>Credo announced their Cardinal 1.6T optical DSP, and Robin DSPs for 400G and 800G, in addition to zero-flap 800G transceivers, showing that they do recognize that optics is going to be around. But so is copper. This puts Credo in a nice position to supply to both markets should CPO adoption get pushed out. Also, microLED is not viewed as useful by the industry, but however that turns out, it&#8217;s interesting that Credo has a &#8220;better than copper, not as good as optics&#8221; technology option that might actually prove useful. Or not. Time will tell.</p><p>I am neither a copper nor an optics bull (I&#8217;m both in the short to medium term), and investors need to think for themselves and stop shorting <span class="cashtag-wrap" data-attrs="{&quot;symbol&quot;:&quot;$CRDO&quot;}" data-component-name="CashtagToDOM"></span> every time there is some uptick in optics news. Jensen was clear at GTC that we are going to use both copper and optics in the near term. Vera-Rubin NVL576 will use NVL72 or NVL144 racks hooked up to each other via optics, but each rack will still use copper.
Also, remember those infamous purple cables in Meta&#8217;s MTIA 400?</p><div class="twitter-embed" data-attrs="{&quot;url&quot;:&quot;https://x.com/KairosPraxis/status/2032107344239141221?s=20&quot;,&quot;full_text&quot;:&quot;Scale-up for $META's MTIA 400 custom asic: I spy with my little eye something purple.\n\n( $CRDO AECs connecting the chips to switches ) &quot;,&quot;username&quot;:&quot;KairosPraxis&quot;,&quot;name&quot;:&quot;Kairos&quot;,&quot;profile_image_url&quot;:&quot;https://pbs.substack.com/profile_images/2014566163502555136/1yuzI8Ok_normal.jpg&quot;,&quot;date&quot;:&quot;2026-03-12T14:52:00.000Z&quot;,&quot;photos&quot;:[{&quot;img_url&quot;:&quot;https://pbs.substack.com/media/HDMMEVSXMAArjVC.jpg&quot;,&quot;link_url&quot;:&quot;https://t.co/1iOFWpjIOq&quot;}],&quot;quoted_tweet&quot;:{},&quot;reply_count&quot;:4,&quot;retweet_count&quot;:4,&quot;like_count&quot;:22,&quot;impression_count&quot;:3145,&quot;expanded_url&quot;:null,&quot;video_url&quot;:null,&quot;belowTheFold&quot;:true}" data-component-name="Twitter2ToDOM"></div><p>(via businesswire: <a href="https://www.businesswire.com/news/home/20260317077875/en/">Cardinal 1.6T DSP</a>, <a href="https://www.stocktitan.net/news/CRDO/credo-introduces-robin-800g-optical-dsp-family-tailored-for-next-1vlmhs78it9v.html">Robin 400/800G DSP</a>, <a href="https://www.stocktitan.net/news/CRDO/credo-launches-800g-zero-flap-optical-transceivers-engineered-for-ai-ul3s634c3mu6.html">800G-ZF</a>)</p><div><hr></div><h4>Lumentum: 1060 nm VCSEL for slow-but-wide optical scale-up</h4><p>Vertical Cavity Surface Emitting Lasers (VCSELs) are a mature technology, but are not great for ultra-high-speed optical communication links because of their nonlinear behavior, wide spectral linewidth, dispersion into multiple modes, and limitations on modulation bandwidth.
But you can run them at slower speeds, and then use a VCSEL array to run multiple lanes in parallel &#8212; <strong>like the HBM of optics</strong>, so to speak.</p><p>VCSELs have another major advantage today. From Lumentum:</p><blockquote><p>As a VCSEL-based scale-up solution, the platform also provides <strong>an independent supply chain alternative to silicon photonics and InP laser-based architectures</strong>.</p></blockquote><p>InP alternative you say? &#128064; VCSELs typically use Gallium Arsenide (GaAs) as the III-V material for laser light generation, not InP, which is in extremely short supply. Just imagine a scenario where we could do optical scale-up with GaAs VCSELs; where does that put the highly sought-after InP supply chain?</p><p>Luckily for <span class="cashtag-wrap" data-attrs="{&quot;symbol&quot;:&quot;$LITE&quot;}" data-component-name="CashtagToDOM"></span> bulls, Lumentum is also the leader in InP EML lasers, which are currently in high demand, and their CW lasers for CPO have some competition from Coherent, Applied Optoelectronics and others. If they corner the market with 1060 nm VCSELs for optical scale-up, now wouldn&#8217;t that be a Lumentum bull case? Oh, and if you&#8217;re wondering about the strange choice of 1060 nm wavelength, here is what Lumentum has to say:</p><blockquote><p>Compared to conventional 850nm datacom VCSELs, <strong>1060nm devices deliver improved speed capability, superior high-temperature performance, and exceptional long-term reliability.</strong></p></blockquote><p>We&#8217;re still looking at demo hardware from OFC 2026, but this is a tech to monitor closely.
If you want to learn more about the &#8220;slow-but-wide&#8221; approach to optics, check out Lightmatter&#8217;s <a href="https://lightmatter.co/blog/wide-and-parallel-wins/">blog post</a>.</p><p>(via <a href="https://investor.lumentum.com/financial-news-releases/news-details/2026/Lumentum-Showcases-Breakthrough-Optical-Scale-Up-Demonstration-at-OFC-2026-Using-VCSEL-Technology/default.aspx">Lumentum</a>)</p><div><hr></div><h4>NVIDIA DGX Station GB300</h4><p>At GTC, Jensen mentioned how each company should have their agentic AI strategy, or risk falling behind. One concern for many sensitive end markets is the company information staying on premises &#8212; think healthcare, insurance companies, law firms, and even engineering firms. </p><p>The DGX GB300 is the enterprise solution that you need, because it is explicitly designed for the full develop-to-deploy pipeline, not just R&amp;D. Enterprises deploy it on-premises in their own data centers or colocation facilities so they can run private, secure, high-performance inference without sending sensitive data to public clouds. 
That is &#8220;local inferencing&#8221; in the enterprise sense &#8212; your data and models stay inside your infrastructure, with full control and predictable cost/latency.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!kbHf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6387b87b-0f7c-4361-b6a3-f352b8bb6941_1920x1080.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!kbHf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6387b87b-0f7c-4361-b6a3-f352b8bb6941_1920x1080.jpeg 424w, https://substackcdn.com/image/fetch/$s_!kbHf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6387b87b-0f7c-4361-b6a3-f352b8bb6941_1920x1080.jpeg 848w, https://substackcdn.com/image/fetch/$s_!kbHf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6387b87b-0f7c-4361-b6a3-f352b8bb6941_1920x1080.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!kbHf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6387b87b-0f7c-4361-b6a3-f352b8bb6941_1920x1080.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!kbHf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6387b87b-0f7c-4361-b6a3-f352b8bb6941_1920x1080.jpeg" width="1456" height="819" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6387b87b-0f7c-4361-b6a3-f352b8bb6941_1920x1080.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!kbHf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6387b87b-0f7c-4361-b6a3-f352b8bb6941_1920x1080.jpeg 424w, https://substackcdn.com/image/fetch/$s_!kbHf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6387b87b-0f7c-4361-b6a3-f352b8bb6941_1920x1080.jpeg 848w, https://substackcdn.com/image/fetch/$s_!kbHf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6387b87b-0f7c-4361-b6a3-f352b8bb6941_1920x1080.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!kbHf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6387b87b-0f7c-4361-b6a3-f352b8bb6941_1920x1080.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Jensen at it, signing stuff again. Karpathy trying to look happy after Jensen scribbled all over his new toy.</figcaption></figure></div><p>This machine is a beast; just look at the specs below. My guess is the cost is anywhere between $30,000-$50,000, because the GB10 version is ~$5,000. Even if my estimates are off by 2x, this machine is a no brainer to deploy within a company so that engineers can use all the tokens that their on-prem hardware supports. 
This kind of hardware makes it easy to treat token costs as depreciating CapEx, rather than buying tokens from cloud companies and treating it as OpEx.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!EsUa!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F242c31b7-6861-40da-8aca-b00f6ac0624d_483x518.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!EsUa!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F242c31b7-6861-40da-8aca-b00f6ac0624d_483x518.png 424w, https://substackcdn.com/image/fetch/$s_!EsUa!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F242c31b7-6861-40da-8aca-b00f6ac0624d_483x518.png 848w, https://substackcdn.com/image/fetch/$s_!EsUa!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F242c31b7-6861-40da-8aca-b00f6ac0624d_483x518.png 1272w, https://substackcdn.com/image/fetch/$s_!EsUa!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F242c31b7-6861-40da-8aca-b00f6ac0624d_483x518.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!EsUa!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F242c31b7-6861-40da-8aca-b00f6ac0624d_483x518.png" width="483" height="518" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/242c31b7-6861-40da-8aca-b00f6ac0624d_483x518.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:518,&quot;width&quot;:483,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:59283,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.viksnewsletter.com/i/191432991?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F242c31b7-6861-40da-8aca-b00f6ac0624d_483x518.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!EsUa!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F242c31b7-6861-40da-8aca-b00f6ac0624d_483x518.png 424w, https://substackcdn.com/image/fetch/$s_!EsUa!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F242c31b7-6861-40da-8aca-b00f6ac0624d_483x518.png 848w, https://substackcdn.com/image/fetch/$s_!EsUa!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F242c31b7-6861-40da-8aca-b00f6ac0624d_483x518.png 1272w, https://substackcdn.com/image/fetch/$s_!EsUa!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F242c31b7-6861-40da-8aca-b00f6ac0624d_483x518.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" 
viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><em>Aside</em>: if you&#8217;re a $500K engineer reading this, make sure you cost your company half your salary in AI tokens. Tell them Jensen said so in your annual review, and explain to your manager how AI actually makes things cheaper. Post back in the comments with your annual raise.</p><div class="twitter-embed" data-attrs="{&quot;url&quot;:&quot;https://x.com/TFTC21/status/2034725105285353962?s=20&quot;,&quot;full_text&quot;:&quot;Jensen Huang: \&quot;If that $500,000 engineer did not consume at least $250,000 worth of tokens, I am going to be deeply alarmed. This is no different than a chip designer who says 'I'm just going to use paper and pencil. 
I don't think I'm going to need any CAD tools.'\&quot; &quot;,&quot;username&quot;:&quot;TFTC21&quot;,&quot;name&quot;:&quot;TFTC&quot;,&quot;profile_image_url&quot;:&quot;https://pbs.substack.com/profile_images/2023893257567301632/BHh-5lh5_normal.jpg&quot;,&quot;date&quot;:&quot;2026-03-19T20:14:03.000Z&quot;,&quot;photos&quot;:[{&quot;img_url&quot;:&quot;https://substackcdn.com/image/upload/w_1028,c_limit,q_auto:best/l_twitter_play_button_rvaygk,w_88/sdblyqjgytbwbw6dwowq&quot;,&quot;link_url&quot;:&quot;https://t.co/CcoYHRgrpy&quot;}],&quot;quoted_tweet&quot;:{},&quot;reply_count&quot;:359,&quot;retweet_count&quot;:503,&quot;like_count&quot;:6916,&quot;impression_count&quot;:1668382,&quot;expanded_url&quot;:null,&quot;video_url&quot;:&quot;https://video.twimg.com/amplify_video/2034724590803722244/vid/avc1/1280x720/8iEY2DxfC8kXkrIP.mp4&quot;,&quot;belowTheFold&quot;:true}" data-component-name="Twitter2ToDOM"></div><p><em>Aside&#8217;s aside</em>: Just ask the manager to buy you a DGX Station GB300.</p><p>(via <a href="https://blogs.nvidia.com/blog/gtc-2026-news/#dgx-station">Nvidia</a>)</p><div><hr></div><h4>Nebius: $27B AI Infrastructure Agreement with Meta</h4><p>Massive five-year deal, and it helps to understand how it works.</p><ul><li><p><strong>$12 billion committed/dedicated capacity</strong>: Nebius will build and reserve AI compute clusters specifically for Meta across multiple locations &#8594; first large-scale deployments of NVIDIA&#8217;s Vera Rubin platform. Deliveries start early 2027.</p></li><li><p><strong>Up to $15 billion additional</strong>: Meta has committed to buy any remaining available capacity in certain upcoming Nebius clusters (after Nebius sells to third-party customers first).</p></li></ul><p>For Meta, this means that they have secured compute capacity five years in advance. This also shows that AI infrastructure buildouts will be a hybrid of self-building and outsourcing. 
Also, the infrastructure spending is showing no signs of slowing down.</p><p>For Nebius, this deal is on the order of their market cap of ~$30B. The locked-in utilization rate from a customer the size of Meta means that their buildout will be easier to finance. But the risks here lie in the execution timelines for datacenters and the availability of power to keep them running.</p><p>(via <a href="https://nebius.com/newsroom/nebius-signs-new-ai-infrastructure-agreement-with-meta">Nebius</a>)</p><div><hr></div><p>Have a great weekend!</p>]]></content:encoded></item><item><title><![CDATA[Beyond GTC: A Deep Dive into Compute, LPX, and the Untold Story of SpecDec]]></title><description><![CDATA[Analyzing the CPU, GPU, and LPU chip ratios unveiled at the Nvidia GTC keynote, the impact of the Groq LPX chip on disaggregated decoding, and its potential for speculative decoding in AI inference]]></description><link>https://www.viksnewsletter.com/p/beyond-gtc-a-deep-dive-into-compute-lpx</link><guid isPermaLink="false">https://www.viksnewsletter.com/p/beyond-gtc-a-deep-dive-into-compute-lpx</guid><dc:creator><![CDATA[Vikram Sekar]]></dc:creator><pubDate>Wed, 18 Mar 2026 16:19:39 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!ROq5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0beb524c-6c1a-497c-bce4-bdf8b2498699_1600x825.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The Nvidia GTC keynote was a much-anticipated event.
<span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Austin Lyons&quot;,&quot;id&quot;:8066776,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c180a750-7572-4aff-88e4-317aa435d533_1203x902.jpeg&quot;,&quot;uuid&quot;:&quot;0bdc61af-5b84-464f-8dcb-1555715d7816&quot;}" data-component-name="MentionToDOM"></span> of <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Chipstrat&quot;,&quot;id&quot;:2003179,&quot;type&quot;:&quot;pub&quot;,&quot;url&quot;:&quot;https://open.substack.com/pub/chipstrat&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/27769444-42f3-4b43-9683-4fe7826c06b8_608x608.png&quot;,&quot;uuid&quot;:&quot;f3ba2ba6-1dd1-4322-bc97-788ce3f9f1c0&quot;}" data-component-name="MentionToDOM"></span> and I had a <a href="https://youtu.be/UfaIU6h0YdY">quick chat with our unfiltered takes</a> on the same day as GTC, on the Semi Doped podcast. Check it out.</p><p>With the introduction of the Groq LPX chips, Vera-Rubin has 7 different kinds of chips that can all be integrated into racks that comprise the entire system. The keynote highlighted where each of these racks go in the entire pod, which I color-coded as shown below. 
This pod architecture reveals quite a few interesting details which we will explore in this post.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!iCXl!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e7f98e0-3ca5-4b89-9661-06e9066862be_1600x694.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!iCXl!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e7f98e0-3ca5-4b89-9661-06e9066862be_1600x694.png 424w, https://substackcdn.com/image/fetch/$s_!iCXl!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e7f98e0-3ca5-4b89-9661-06e9066862be_1600x694.png 848w, https://substackcdn.com/image/fetch/$s_!iCXl!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e7f98e0-3ca5-4b89-9661-06e9066862be_1600x694.png 1272w, https://substackcdn.com/image/fetch/$s_!iCXl!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e7f98e0-3ca5-4b89-9661-06e9066862be_1600x694.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!iCXl!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e7f98e0-3ca5-4b89-9661-06e9066862be_1600x694.png" width="1456" height="632" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9e7f98e0-3ca5-4b89-9661-06e9066862be_1600x694.png&quot;,&quot;srcNoWatermark&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8ed78a79-6864-4642-a003-a4ae39e99da6_1600x694.png&quot;,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:632,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!iCXl!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e7f98e0-3ca5-4b89-9661-06e9066862be_1600x694.png 424w, https://substackcdn.com/image/fetch/$s_!iCXl!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e7f98e0-3ca5-4b89-9661-06e9066862be_1600x694.png 848w, https://substackcdn.com/image/fetch/$s_!iCXl!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e7f98e0-3ca5-4b89-9661-06e9066862be_1600x694.png 1272w, https://substackcdn.com/image/fetch/$s_!iCXl!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e7f98e0-3ca5-4b89-9661-06e9066862be_1600x694.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" 
stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Source: Nvidia. Annotations by Vik&#8217;s Newsletter.</figcaption></figure></div><p>We will discuss:</p><ul><li><p>How CPU-GPU-LPU ratios are scaling</p></li><li><p>Understanding disaggregated decoding using LPX</p></li><li><p>The new frontier of inference speed unlocked by LPX</p></li></ul><p>After the paywall:</p><ul><li><p>What went unnoticed at GTC: Speculative decoding on LPX</p></li><li><p>Does VLIW really matter after all?</p><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.viksnewsletter.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><em>Upgrade to a paid subscription to get full access to deep dives like this one. 
You can expense it too!</em></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div></li></ul><p>If you are not a paid subscriber, you can purchase just this article using the button below. You can find the whole catalog of articles for purchase <a href="https://www.semiexponent.com/products">at this link</a>.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.semiexponent.com/beyond-gtc-a-deep-dive-into-compute-lpx-and-the-untold-story-of-specdec&quot;,&quot;text&quot;:&quot;Buy the ebook&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.semiexponent.com/beyond-gtc-a-deep-dive-into-compute-lpx-and-the-untold-story-of-specdec"><span>Buy the ebook</span></a></p><div><hr></div><h3>Counting Compute: CPU-GPU-LPU Ratios</h3><p>The CPU utilization comes from two places: (1) 2 Vera CPUs per GPU tray, and (2) 8 Vera CPUs per CPU tray. From Jensen at GTC:</p><blockquote><p>We never thought we would be selling CPUs standalone. We are selling a lot of CPUs stand-alone. Already, for sure, [this is] going to be a multi-billion dollar business for us.</p></blockquote><p>Knowing the configurations of the xPU trays and construction of the superpod, let&#8217;s calculate how compute scales:</p><ul><li><p>2 Vera CPUs per 1U GPU tray, 18 GPU trays per rack, 16 racks = 576 CPUs.</p></li><li><p>8 Vera CPUs per stand-alone CPU tray. There are 32 CPU trays per rack, and two stand-alone CPU racks. That gives a total of 8 CPUs/tray &#215; 32 trays per rack &#215; 2 racks = 512 CPUs.</p></li></ul><p>Thus, the total estimated Vera CPUs per super pod is 576 + 512 = 1,088 CPUs. 
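As a sanity check, the CPU tally works out as follows (a quick sketch using the tray and rack counts just listed):

```python
# Vera CPU tally for the super pod, using the counts listed above.
cpus_in_gpu_trays = 2 * 18 * 16  # 2 CPUs/GPU tray x 18 trays/rack x 16 racks = 576
cpus_in_cpu_racks = 8 * 32 * 2   # 8 CPUs/tray x 32 trays/rack x 2 CPU racks = 512
total_vera_cpus = cpus_in_gpu_trays + cpus_in_cpu_racks
print(total_vera_cpus)  # 1088
```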
There are 4 Rubin GPUs per 1U compute tray, 18 trays per rack, 16 racks = 1,152 GPUs.</p><p>There is a third location where CPUs are needed. For every Groq tray containing 8 LPX chips, there is one more CPU that needs to be accounted for &#8211; the host CPU. The picture below shows the LPX rack on the right, with 32 compute trays per rack, and ten such racks in the pod. So let&#8217;s add these chip counts in:</p><ul><li><p><strong>LPU</strong>: 8 LPUs/tray &#215; 32 trays/rack &#215; 10 racks = 2,560 LPUs</p></li><li><p><strong>CPU</strong>: 1 CPU/tray &#215; 32 trays/rack &#215; 10 racks = 320 CPUs</p></li></ul><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!mNiJ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4010cdfb-dacd-4926-aa17-f21f05c6ffe4_1536x2048.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!mNiJ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4010cdfb-dacd-4926-aa17-f21f05c6ffe4_1536x2048.png 424w, https://substackcdn.com/image/fetch/$s_!mNiJ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4010cdfb-dacd-4926-aa17-f21f05c6ffe4_1536x2048.png 848w, https://substackcdn.com/image/fetch/$s_!mNiJ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4010cdfb-dacd-4926-aa17-f21f05c6ffe4_1536x2048.png 1272w, https://substackcdn.com/image/fetch/$s_!mNiJ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4010cdfb-dacd-4926-aa17-f21f05c6ffe4_1536x2048.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!mNiJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4010cdfb-dacd-4926-aa17-f21f05c6ffe4_1536x2048.png" width="1456" height="1941" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4010cdfb-dacd-4926-aa17-f21f05c6ffe4_1536x2048.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1941,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!mNiJ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4010cdfb-dacd-4926-aa17-f21f05c6ffe4_1536x2048.png 424w, https://substackcdn.com/image/fetch/$s_!mNiJ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4010cdfb-dacd-4926-aa17-f21f05c6ffe4_1536x2048.png 848w, https://substackcdn.com/image/fetch/$s_!mNiJ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4010cdfb-dacd-4926-aa17-f21f05c6ffe4_1536x2048.png 1272w, https://substackcdn.com/image/fetch/$s_!mNiJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4010cdfb-dacd-4926-aa17-f21f05c6ffe4_1536x2048.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset 
pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Source: <a href="https://x.com/BenBajarin/status/2033963670112510372?s=20">Ben Bajarin on X</a></figcaption></figure></div><p>Interestingly, the CPU in the LPX tray is <em>not </em>a Vera CPU, likely because the trays need x86 chips &#8211; a carryover from the GroqNode architecture, which used dual-socket AMD EPYC CPUs.</p><p>All in all, we have the following compute units now: <strong>1,152 GPUs, 1,408 CPUs, 2,560 LPUs</strong>. In our earlier discussion below about the rise of CPUs for agentic AI, we predicted that CPUs might exceed GPUs at the rack scale. 
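The full pod-level tally can be reproduced with the same back-of-the-envelope arithmetic (all counts as stated in this post):

```python
# Pod-level compute tally from the rack configurations described above.
gpus = 4 * 18 * 16                    # Rubin GPUs: 4/tray x 18 trays/rack x 16 racks
vera_cpus = 2 * 18 * 16 + 8 * 32 * 2  # CPUs in GPU trays plus stand-alone CPU racks
lpx_host_cpus = 1 * 32 * 10           # one host CPU per LPX tray x 32 trays x 10 racks
lpus = 8 * 32 * 10                    # LPUs: 8/tray x 32 trays/rack x 10 racks
total_cpus = vera_cpus + lpx_host_cpus
print(gpus, total_cpus, lpus)  # 1152 1408 2560
```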
Additional CPU racks are straightforward to deploy since they only need ethernet connectivity, not the high-bandwidth interconnects required for weight transfer.</p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;97a13b0e-c564-4acf-a91e-b14a63948e69&quot;,&quot;caption&quot;:&quot;For the better part of two years, CPUs have been an afterthought in AI infrastructure while GPUs got all the attention for training, and more recently inference. In the last 6 months, the rise of agentic AI is proving to be the &#8220;killer app&#8221; that AI inference needed.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;The CPU Bottleneck in Agentic AI and Why Server CPUs Matter More Than Ever&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:124411709,&quot;name&quot;:&quot;Vikram Sekar&quot;,&quot;bio&quot;:&quot;EE PhD. 15+ years in semiconductor engineering. I've helped build the technology I write about.&quot;,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!RTM-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5e3e259-005c-4bcf-bffd-1a20cd78aa86_1080x1080.png&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:100}],&quot;post_date&quot;:&quot;2026-02-17T06:01:31.028Z&quot;,&quot;cover_image&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7a3b3c30-89da-4ddc-876b-182e455e96e3_1456x1048.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.viksnewsletter.com/p/the-cpu-bottleneck-in-agentic-ai&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:188043920,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:72,&quot;comment_count&quot;:0,&quot;publication_id&quot;:2065897,&quot;publication_name&quot;:&quot;Vik's 
Newsletter&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!9JlA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa409d69d-ca10-4bfe-a1fc-f8d291690566_185x185.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><p>A final detail worth mentioning is that the LPX compute tray contains an FPGA (likely Intel/Altera), whose main purpose is to provide the logic that expands the memory fabric to DRAM. The LPX supports up to 256 GB of DRAM expansion via the FPGA, and up to another 128 GB via the host CPU DRAM. My guess is that the first tier of DRAM uses an FPGA to preserve as much memory bandwidth as possible when data is offloaded to it. Relying on the scheduler on the CPU host is almost always slower. It is not entirely clear whether the FPGA also orchestrates the Groq LPUs.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!kdBE!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5eeca9c1-03c2-4582-ab37-ddddfd36363f_968x605.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!kdBE!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5eeca9c1-03c2-4582-ab37-ddddfd36363f_968x605.png 424w, https://substackcdn.com/image/fetch/$s_!kdBE!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5eeca9c1-03c2-4582-ab37-ddddfd36363f_968x605.png 848w, 
https://substackcdn.com/image/fetch/$s_!kdBE!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5eeca9c1-03c2-4582-ab37-ddddfd36363f_968x605.png 1272w, https://substackcdn.com/image/fetch/$s_!kdBE!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5eeca9c1-03c2-4582-ab37-ddddfd36363f_968x605.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!kdBE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5eeca9c1-03c2-4582-ab37-ddddfd36363f_968x605.png" width="968" height="605" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5eeca9c1-03c2-4582-ab37-ddddfd36363f_968x605.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:605,&quot;width&quot;:968,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!kdBE!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5eeca9c1-03c2-4582-ab37-ddddfd36363f_968x605.png 424w, https://substackcdn.com/image/fetch/$s_!kdBE!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5eeca9c1-03c2-4582-ab37-ddddfd36363f_968x605.png 848w, 
https://substackcdn.com/image/fetch/$s_!kdBE!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5eeca9c1-03c2-4582-ab37-ddddfd36363f_968x605.png 1272w, https://substackcdn.com/image/fetch/$s_!kdBE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5eeca9c1-03c2-4582-ab37-ddddfd36363f_968x605.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Source: Nvidia keynote at Mar 2026 GTC.</figcaption></figure></div><h3>Disaggregated Decoding Using LPX</h3><p>In a 
previous article we published pre-GTC, we discussed the implications of SRAM-based accelerators for decode operations. In the light of more Groq details, we can extend this discussion further.</p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;b10766fc-d7d7-49f5-a50e-2c7377e82b8c&quot;,&quot;caption&quot;:&quot;A recent Wall Street Journal article stirred excitement about NVIDIA&#8217;s upcoming GTC 2026 announcements:&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;GTC 2026 Preview | Implications of Nvidia's SRAM-Decode Hardware on the Inference Market&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:124411709,&quot;name&quot;:&quot;Vikram Sekar&quot;,&quot;bio&quot;:&quot;EE PhD. 15+ years in semiconductor engineering. I've helped build the technology I write about.&quot;,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!RTM-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5e3e259-005c-4bcf-bffd-1a20cd78aa86_1080x1080.png&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:100}],&quot;post_date&quot;:&quot;2026-03-04T10:36:44.098Z&quot;,&quot;cover_image&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e50dfb9e-d3f3-4f9c-ab31-2bfa99c0c2e5_1456x1048.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.viksnewsletter.com/p/gtc-2026-preview-implications-of-sram-decode&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:189861381,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:37,&quot;comment_count&quot;:8,&quot;publication_id&quot;:2065897,&quot;publication_name&quot;:&quot;Vik's 
Newsletter&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!9JlA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa409d69d-ca10-4bfe-a1fc-f8d291690566_185x185.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><p>At GTC, Jensen was specific about the use case of SRAM accelerators for the feed-forward network operation of the decode process. In September 2025, the GDDR7-based Rubin CPX was announced for disaggregation of prefill, but that narrative has largely been lost amid confusion about whether the LPX has replaced it.</p><p>But this is comparing apples and oranges: the LPX is bandwidth-optimized hardware for decode, while the CPX was designed to be more FLOPs-heavy for prefill. The real question is whether separate hardware like the CPX is required for prefill, or whether prefill can simply be handled by the HBM-based Rubin. Now, the decode phase is disaggregated further.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ROq5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0beb524c-6c1a-497c-bce4-bdf8b2498699_1600x825.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ROq5!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0beb524c-6c1a-497c-bce4-bdf8b2498699_1600x825.png 424w, https://substackcdn.com/image/fetch/$s_!ROq5!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0beb524c-6c1a-497c-bce4-bdf8b2498699_1600x825.png 848w, 
https://substackcdn.com/image/fetch/$s_!ROq5!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0beb524c-6c1a-497c-bce4-bdf8b2498699_1600x825.png 1272w, https://substackcdn.com/image/fetch/$s_!ROq5!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0beb524c-6c1a-497c-bce4-bdf8b2498699_1600x825.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!ROq5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0beb524c-6c1a-497c-bce4-bdf8b2498699_1600x825.png" width="1456" height="751" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0beb524c-6c1a-497c-bce4-bdf8b2498699_1600x825.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:751,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!ROq5!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0beb524c-6c1a-497c-bce4-bdf8b2498699_1600x825.png 424w, https://substackcdn.com/image/fetch/$s_!ROq5!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0beb524c-6c1a-497c-bce4-bdf8b2498699_1600x825.png 848w, 
https://substackcdn.com/image/fetch/$s_!ROq5!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0beb524c-6c1a-497c-bce4-bdf8b2498699_1600x825.png 1272w, https://substackcdn.com/image/fetch/$s_!ROq5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0beb524c-6c1a-497c-bce4-bdf8b2498699_1600x825.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Source: Nvidia (<a 
href="https://developer.nvidia.com/blog/inside-nvidia-groq-3-lpx-the-low-latency-inference-accelerator-for-the-nvidia-vera-rubin-platform/">link</a>)</figcaption></figure></div><p>The decode phase involves the calculation of attention, followed by the Feed-Forward Network (FFN). HBM-based Rubin GPUs are the best candidates for autoregressive attention calculation because tokens are generated sequentially, one after the other. The KV cache grows with input tokens and is best stored in HBM; as a result, demand for HBM will persist.</p><p>The FFN calculation, on the other hand, is a giant matrix multiplication that lends itself to parallelism. The matrix can be sliced into smaller pieces across many chips (tensor parallelism), computed independently, and reassembled later. MoE models take this further because each expert can live on its own set of chips (model parallelism), so you don&#8217;t need one chip to hold everything.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!pWCS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0ab0387d-b489-42bf-b528-5200c5d5e3db_1600x891.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!pWCS!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0ab0387d-b489-42bf-b528-5200c5d5e3db_1600x891.png 424w, https://substackcdn.com/image/fetch/$s_!pWCS!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0ab0387d-b489-42bf-b528-5200c5d5e3db_1600x891.png 848w, 
https://substackcdn.com/image/fetch/$s_!pWCS!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0ab0387d-b489-42bf-b528-5200c5d5e3db_1600x891.png 1272w, https://substackcdn.com/image/fetch/$s_!pWCS!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0ab0387d-b489-42bf-b528-5200c5d5e3db_1600x891.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!pWCS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0ab0387d-b489-42bf-b528-5200c5d5e3db_1600x891.png" width="1456" height="811" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0ab0387d-b489-42bf-b528-5200c5d5e3db_1600x891.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:811,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!pWCS!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0ab0387d-b489-42bf-b528-5200c5d5e3db_1600x891.png 424w, https://substackcdn.com/image/fetch/$s_!pWCS!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0ab0387d-b489-42bf-b528-5200c5d5e3db_1600x891.png 848w, 
https://substackcdn.com/image/fetch/$s_!pWCS!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0ab0387d-b489-42bf-b528-5200c5d5e3db_1600x891.png 1272w, https://substackcdn.com/image/fetch/$s_!pWCS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0ab0387d-b489-42bf-b528-5200c5d5e3db_1600x891.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">Source: Nvidia GTC Keynote Mar 2026</figcaption></figure></div><p>The latter FFN/MoE phase is what the Groq LPX is 
meant to address. Nvidia has packed in a high density of eight chips per LPU tray because each Groq 3 chip has only 500 MB of SRAM. So the 2,560 LPUs in a super pod give 1.28 TB of SRAM capacity at 150 TB/s of memory bandwidth per chip (~384 PB/s at rack scale). That is plenty of capacity to hold the FFN weights of a large model, which incidentally account for <a href="https://www.viksnewsletter.com/p/a-primer-on-transformer-architecture">two-thirds of all weights</a> in a model.</p><p>The transfer of weights and KV cache between the attention calculation on the Rubin GPUs and the low-latency decode hardware on LPX is done over Spectrum-X Ethernet, though presumably InfiniBand could be used as well.</p><h3>A New Frontier of Inference Speed</h3><p>The addition of LPX racks unlocks token generation speeds that were previously impossible with GPUs alone. The graph below plots token throughput on the y-axis (how many tokens/sec can be generated, normalized to power) against interactivity on the x-axis (the speed each user experiences, in tokens/sec available per user). 
Ideally, we want to be on the top right of the chart, but this is usually not possible due to simultaneous compute and memory bandwidth limitations.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!eSwL!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5bfdc67-de2e-420a-a923-cec36abe7e9c_2048x1103.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!eSwL!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5bfdc67-de2e-420a-a923-cec36abe7e9c_2048x1103.png 424w, https://substackcdn.com/image/fetch/$s_!eSwL!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5bfdc67-de2e-420a-a923-cec36abe7e9c_2048x1103.png 848w, https://substackcdn.com/image/fetch/$s_!eSwL!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5bfdc67-de2e-420a-a923-cec36abe7e9c_2048x1103.png 1272w, https://substackcdn.com/image/fetch/$s_!eSwL!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5bfdc67-de2e-420a-a923-cec36abe7e9c_2048x1103.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!eSwL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5bfdc67-de2e-420a-a923-cec36abe7e9c_2048x1103.png" width="1456" height="784" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c5bfdc67-de2e-420a-a923-cec36abe7e9c_2048x1103.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:784,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!eSwL!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5bfdc67-de2e-420a-a923-cec36abe7e9c_2048x1103.png 424w, https://substackcdn.com/image/fetch/$s_!eSwL!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5bfdc67-de2e-420a-a923-cec36abe7e9c_2048x1103.png 848w, https://substackcdn.com/image/fetch/$s_!eSwL!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5bfdc67-de2e-420a-a923-cec36abe7e9c_2048x1103.png 1272w, https://substackcdn.com/image/fetch/$s_!eSwL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5bfdc67-de2e-420a-a923-cec36abe7e9c_2048x1103.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Source: Nvidia GTC Keynote Mar 2026</figcaption></figure></div><p>The introduction of LPX pushes the frontier of what is possible in inference speed, with an advertised 1,000+ tokens/sec/user. This is why licensing Groq&#8217;s technology was critical for Nvidia; it provides a capability they did not have with their GPU family of chips. Most importantly, Jensen explains how to deploy AI infrastructure based on expected workloads:</p><ul><li><p>For high-throughput activity that does not require low latency &#8594; stick to Vera-Rubin NVL72</p></li><li><p>For low-latency, &#8220;time is money&#8221; applications like coding and engineering &#8594; deploy up to 25% of LPX along with Vera-Rubin.</p></li></ul><p>In this context, it is not clear what 25% means: number of chips? Total memory? 
A better metric would be how many LPX racks must be deployed to achieve a desired interactivity level.</p><h3>Speculative Decoding on LPX</h3><p>To be honest, I was expecting Nvidia&#8217;s announcement at GTC to be centered around the acceleration of speculative decoding (SpecDec). Instead, it was more of a plain-vanilla attention/FFN disaggregation. This is likely because SpecDec is an inference technique still crossing from research to early production. Meta, Groq, and a few other labs have looked at it in detail, but it is far from standard infrastructure at the moment. However, I do believe that LPX has massive benefits for SpecDec, a point that went completely unaddressed in the keynote.</p><p>After the paywall, we will discuss what Nvidia did not:</p><ul><li><p>What went unnoticed at GTC: Speculative decoding on LPX</p></li><li><p>Does VLIW really matter after all?</p></li></ul>
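<p>To make the SpecDec idea concrete, here is a minimal greedy sketch of the draft-and-verify loop. The <code>draft_model</code> and <code>target_model</code> functions are hypothetical toy stand-ins (real systems verify draft tokens probabilistically rather than by exact match); the point is only the loop structure: a cheap draft proposes a few tokens, the large model verifies them in one pass, and the final output is guaranteed identical to decoding with the large model alone.</p>

```python
def draft_model(ctx):
    # Toy stand-in for a small, fast draft model.
    return (sum(ctx) + 1) % 7

def target_model(ctx):
    # Toy stand-in for the large target model (the "ground truth").
    return (sum(ctx) * 3 + 1) % 7

def speculative_decode(prompt, n_tokens, k=4):
    """Greedy speculative decoding: the draft proposes k tokens, the
    target verifies them; accept the longest matching prefix, then
    append the target's own next token and repeat."""
    out = list(prompt)
    while len(out) - len(prompt) < n_tokens:
        # 1. Draft proposes k tokens autoregressively (cheap, low latency).
        ctx, proposal = list(out), []
        for _ in range(k):
            t = draft_model(tuple(ctx))
            proposal.append(t)
            ctx.append(t)
        # 2. Verify against the target; accept the matching prefix.
        #    (A real system scores all k positions in one batched pass.)
        ctx, accepted = list(out), 0
        for t in proposal:
            if target_model(tuple(ctx)) != t:
                break
            ctx.append(t)
            accepted += 1
        out.extend(proposal[:accepted])
        # 3. Emit the target's token at the first mismatch (or after full
        #    acceptance), so the output matches pure target-model decoding.
        out.append(target_model(tuple(out)))
    return out[len(prompt):len(prompt) + n_tokens]
```

<p>Every accepted draft token is one large-model decode step saved, and the sequential draft passes are exactly the latency-critical inner loop where a small, SRAM-resident chip like the LPX should shine.</p>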
      <p>
          <a href="https://www.viksnewsletter.com/p/beyond-gtc-a-deep-dive-into-compute-lpx">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[🍪 TWiC: MTIA x4, AVGO 400G, UMC, Nebius, SiPho Readiness++ ]]></title><description><![CDATA[This Week in Chips: Key developments across the semiconductor and adjacent universes.]]></description><link>https://www.viksnewsletter.com/p/twic-mtia-x4-avgo-400g-umc-nebius</link><guid isPermaLink="false">https://www.viksnewsletter.com/p/twic-mtia-x4-avgo-400g-umc-nebius</guid><dc:creator><![CDATA[Vikram Sekar]]></dc:creator><pubDate>Fri, 13 Mar 2026 12:31:09 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!eO0p!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9c4bf6a2-e411-4065-9fd9-84fa1e7eda38_1170x644.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Here is some content from this week in case you&#8217;ve been buried at work, and didn&#8217;t get a chance to look it over.</p><ul><li><p><strong>Substack</strong>: <a href="https://www.viksnewsletter.com/p/arista-ocs-threat">Why Optical Circuit Switching is Arista Networks&#8217; Long-Term Problem</a></p></li><li><p><strong>Semi Doped Podcast</strong>: <a href="https://youtu.be/47cQTPjDUB8?si=XNe0T2Ke4TJfuzm0">The Great Optics-Copper Crossroads</a></p></li></ul><p>Let&#8217;s get to the news. Lots of exciting stuff.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.viksnewsletter.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><em>I like to keep news roundups free, but they still take effort to compile and interpret. Please consider upgrading to a paid sub. 
You can expense it too!</em></p></div></div></div><div><hr></div><h4>META: Four MTIA Chips in Two Years</h4><p>Meta recently announced four Meta Training and Inference Accelerator (MTIA) chips that they are already using in deployment or will roll out within the 2026-27 timeframe. These chips, developed in partnership with Broadcom (whose custom accelerator business is looking pretty good), are part of <a href="https://engineering.fb.com/2025/09/29/data-infrastructure/metas-infrastructure-evolution-and-the-advent-of-ai/">Meta&#8217;s strategy</a> to vertically integrate their chip technology without relying on external supply from Nvidia, allowing them to save on infrastructure spend. 
Here is a quick rundown of each chip.</p><ul><li><p><strong>MTIA 300</strong>: Already deployed in ranking and recommendation (R&amp;R) &#8594; 1 compute chip, 2 network chips, HBM stacks</p></li><li><p><strong>MTIA 400</strong>: Next-gen R&amp;R chip + GenAI &#8594; 2 compute chips, 2 network chips, 1 PCIe chiplet, more HBM stacks and bandwidth</p></li><li><p><strong>MTIA 450</strong>: GenAI optimized &#8594; 2 compute chips but faster, 2 network chips, 1 PCIe chiplet, 2x HBM bandwidth over MTIA 400</p></li><li><p><strong>MTIA 500</strong>: GenAI beast &#8594; 4 compute chips (in 2x2), 2 network chips, 1 PCIe chiplet, more HBM capacity, 50% faster HBM bandwidth over MTIA 450</p></li></ul><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!eEQl!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1fdeae3e-4549-43a4-8b9e-f3a09d35f79b_542x582.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!eEQl!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1fdeae3e-4549-43a4-8b9e-f3a09d35f79b_542x582.png 424w, https://substackcdn.com/image/fetch/$s_!eEQl!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1fdeae3e-4549-43a4-8b9e-f3a09d35f79b_542x582.png 848w, https://substackcdn.com/image/fetch/$s_!eEQl!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1fdeae3e-4549-43a4-8b9e-f3a09d35f79b_542x582.png 1272w, 
https://substackcdn.com/image/fetch/$s_!eEQl!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1fdeae3e-4549-43a4-8b9e-f3a09d35f79b_542x582.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!eEQl!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1fdeae3e-4549-43a4-8b9e-f3a09d35f79b_542x582.png" width="542" height="582" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1fdeae3e-4549-43a4-8b9e-f3a09d35f79b_542x582.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:582,&quot;width&quot;:542,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:108261,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.viksnewsletter.com/i/190636787?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1fdeae3e-4549-43a4-8b9e-f3a09d35f79b_542x582.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!eEQl!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1fdeae3e-4549-43a4-8b9e-f3a09d35f79b_542x582.png 424w, https://substackcdn.com/image/fetch/$s_!eEQl!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1fdeae3e-4549-43a4-8b9e-f3a09d35f79b_542x582.png 848w, https://substackcdn.com/image/fetch/$s_!eEQl!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1fdeae3e-4549-43a4-8b9e-f3a09d35f79b_542x582.png 
1272w, https://substackcdn.com/image/fetch/$s_!eEQl!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1fdeae3e-4549-43a4-8b9e-f3a09d35f79b_542x582.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">Block diagram of the MTIA 500 chip. Source: Meta</figcaption></figure></div><p>Meta&#8217;s approach to AI hardware is to build systems composed of smaller parts, like chiplets, that can be individually upgraded in quick succession as model architectures and numeric formats evolve. 
Meta is clear that they are adopting an &#8220;inference-first&#8221; approach to meet the growing GenAI inference demand.</p><p>Finally, MTIA uses a PyTorch-native approach to programming models, which is a way of saying that models running on Nvidia hardware (using PyTorch on top of CUDA) can easily be ported over to MTIA with no rewriting. This allows Nvidia and MTIA hardware to work alongside each other with minimal friction.</p><p>(via <a href="https://ai.meta.com/blog/meta-mtia-scale-ai-chips-for-billions/">Meta</a>)</p><h4>AVGO: Broadcom delivers Industry&#8217;s First 400G/lane Optical DSP for Next-Generation AI Networks</h4><p>The Taurus BCM83640 is a 400G/lane optical DSP that builds towards 3.2T networking technology, especially when the Tomahawk 7 chip with 204.8 Tbps switch bandwidth is released. I particularly like the figure below that explains how 400G/lane is useful with both 1.6T and 3.2T optical technology.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!eO0p!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9c4bf6a2-e411-4065-9fd9-84fa1e7eda38_1170x644.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!eO0p!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9c4bf6a2-e411-4065-9fd9-84fa1e7eda38_1170x644.jpeg 424w, https://substackcdn.com/image/fetch/$s_!eO0p!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9c4bf6a2-e411-4065-9fd9-84fa1e7eda38_1170x644.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!eO0p!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9c4bf6a2-e411-4065-9fd9-84fa1e7eda38_1170x644.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!eO0p!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9c4bf6a2-e411-4065-9fd9-84fa1e7eda38_1170x644.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!eO0p!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9c4bf6a2-e411-4065-9fd9-84fa1e7eda38_1170x644.jpeg" width="1170" height="644" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9c4bf6a2-e411-4065-9fd9-84fa1e7eda38_1170x644.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:644,&quot;width&quot;:1170,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!eO0p!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9c4bf6a2-e411-4065-9fd9-84fa1e7eda38_1170x644.jpeg 424w, https://substackcdn.com/image/fetch/$s_!eO0p!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9c4bf6a2-e411-4065-9fd9-84fa1e7eda38_1170x644.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!eO0p!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9c4bf6a2-e411-4065-9fd9-84fa1e7eda38_1170x644.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!eO0p!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9c4bf6a2-e411-4065-9fd9-84fa1e7eda38_1170x644.jpeg 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">Source: Broadcom</figcaption></figure></div><p>With the 1.6T generation, the optical DSP has an 8-lane to 4-lane 
gearbox, so a transceiver only needs four lanes running at 400G/lane. With the transition to 3.2T, all 8 lanes run at 400G - no gearboxing required. According to Broadcom&#8217;s <a href="https://www.broadcom.com/products/ethernet-connectivity/phy-and-poe/optical/bcm83640">product page</a>:</p><blockquote><p>The BCM83640-DIE line (optical) transmitter supports multiple advanced 400G/lane laser optics, including a <strong>thin-film lithium niobate (TFLN) modulator</strong>, <strong>differential electro-absorption modulated lasers</strong> (D-EMLs), and <strong>advanced high-bandwidth silicon photonics (SiPh) modulators</strong>.</p></blockquote><p>It&#8217;s not clear to me why there is both a TFLN modulator and an electro-absorption modulator, but clearly TFLN&#8217;s high modulation bandwidth is very attractive for 400G networking. This DSP is built on TSMC 3nm and still uses IMDD for DR4 and FR4 reach. It will be interesting to see when coherent-lite makes an entry.</p><p>(via <a href="https://www.broadcom.com/blog/400g-lane-optical-solutions-pave-the-path-toward-200t">Broadcom</a>)</p><h4>UMC, HyperLight Team up to Mass-Produce TFLN Chiplets, Targeting 1.6T Data Center Bandwidth</h4><p>Thin-film lithium niobate (TFLN) is going to become increasingly important for 1.6T networking generations and beyond because of the extreme modulation bandwidths it makes possible. While the best InP EMLs for 200G/lane have a bandwidth of 60-67 GHz, TFLN modulators from HyperLight exceed 110 GHz. 
This is important for 400G/lane modulators, and is the same reason Broadcom&#8217;s recently announced 400G optical DSP uses TFLN (and D-EML?).</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!fdXo!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e61f73c-76e3-4ee9-864e-f3c004e2f955_488x378.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!fdXo!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e61f73c-76e3-4ee9-864e-f3c004e2f955_488x378.png 424w, https://substackcdn.com/image/fetch/$s_!fdXo!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e61f73c-76e3-4ee9-864e-f3c004e2f955_488x378.png 848w, https://substackcdn.com/image/fetch/$s_!fdXo!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e61f73c-76e3-4ee9-864e-f3c004e2f955_488x378.png 1272w, https://substackcdn.com/image/fetch/$s_!fdXo!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e61f73c-76e3-4ee9-864e-f3c004e2f955_488x378.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!fdXo!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e61f73c-76e3-4ee9-864e-f3c004e2f955_488x378.png" width="488" height="378" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0e61f73c-76e3-4ee9-864e-f3c004e2f955_488x378.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:378,&quot;width&quot;:488,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:204070,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.viksnewsletter.com/i/190636787?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e61f73c-76e3-4ee9-864e-f3c004e2f955_488x378.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!fdXo!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e61f73c-76e3-4ee9-864e-f3c004e2f955_488x378.png 424w, https://substackcdn.com/image/fetch/$s_!fdXo!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e61f73c-76e3-4ee9-864e-f3c004e2f955_488x378.png 848w, https://substackcdn.com/image/fetch/$s_!fdXo!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e61f73c-76e3-4ee9-864e-f3c004e2f955_488x378.png 1272w, https://substackcdn.com/image/fetch/$s_!fdXo!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e61f73c-76e3-4ee9-864e-f3c004e2f955_488x378.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" 
viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Source: Hyperlight</figcaption></figure></div><p>HyperLight has a long-standing relationship with Wavetek and has ramped up HVM production of TFLN on 6&#8221; wafers. Wavetek is a wholly owned subsidiary of United Microelectronics Corporation (UMC) - a major pure-play silicon foundry. With parent company UMC getting into the mix, they are now looking to scale production to 8&#8221; wafers. In contrast, InP is still stuck on 3&#8221;, and talk of 6&#8221; wafers leads directly to supposed yield problems. TFLN on 8&#8221; sounds promising, and if yield and cost hold up, could it be a candidate for 200G applications in the face of InP shortages? 
Maybe some optics experts can weigh in in the comments section.</p><p>(via <a href="https://www.trendforce.com/news/2026/03/12/news-umc-hyperlight-team-up-to-mass-produce-tfln-chiplets-targeting-1-6t-data-center-bandwidth/">TrendForce</a>)</p><h4>NVIDIA and Nebius partner to scale full-stack AI cloud</h4><p>From the press release:</p><blockquote><p>NVIDIA will invest $2 billion in Nebius, reflecting NVIDIA&#8217;s confidence in Nebius&#8217;s business and unique depth of engineering expertise across the full AI technology stack.</p><p>To enable Nebius to deploy more than 5 gigawatts of capacity by end of 2030, NVIDIA will support Nebius&#8217;s early adoption of the latest generation of NVIDIA&#8217;s accelerated computing platform.</p></blockquote><p>The industry sentiment around this is generally positive: it gives Nebius early access to Vera Rubin hardware, additional cash flow, and a positive endorsement from Nvidia, whose $2B investment signals belief in Nebius&#8217; future as an AI infrastructure provider. This is in line with other recent Nvidia investments in Lumentum, Coherent, and CoreWeave.</p><p>The circular nature of these deals is a concern to many investors: Nvidia invests in Nebius, Nebius buys Nvidia hardware. The other point of concern is the growing debt Nebius is taking on: from nearly zero debt in 2024 to total liabilities of $5.3B by the end of 2025. Nvidia&#8217;s $2B will help offset this, but the concern is that debt will grow as Nebius builds out towards 5 GW of capacity.</p><p>(via <a href="https://nebius.com/newsroom/nvidia-and-nebius-partner-to-scale-full-stack-ai-cloud">Nebius</a>)</p><h4>Ayar Labs CEO Mark Wade: &#8220;Mass production of SiPh still faces many challenges&#8221;</h4><p>In a DigiTimes exclusive interview, Ayar Labs CEO Mark Wade stated that silicon photonics as a technology is ready, but mass-producing it at scale is a difficult problem. 
I particularly like Wade&#8217;s response when asked what he thinks of the transition to optical in light of Hock Tan&#8217;s comments that copper is still relevant (emphasis mine):</p><blockquote><p>SerDes and electronic transmission remain the dominant technologies today. Considering that many companies must comment on developments over the next few quarters, their statements are understandable. Our focus, however, is on what will happen in 2028, 2029, and 2030.</p><p><strong>The SiPh supply chain&#8217;s mass production capability is currently not fully ready.</strong> But the pace of improvement in manufacturing readiness is accelerating. Together with our partners, we are actively ensuring the supply chain gradually comes into place to achieve our production targets.</p><p>I also agree that we will not wake up one day to find copper interconnects have completely disappeared. The transition will be gradual. <strong>Around 2028, GPUs and accelerators will begin adopting optical interconnects, and more AI infrastructure will rapidly shift toward optical communications.</strong> Statements from companies like Broadcom or Nvidia about copper remaining in use simply highlight that mass production of SiPh still faces many challenges.</p></blockquote><p>(via <a href="https://www.digitimes.com/news/a20260310PD202/siph-startup-funding-production-taiwan.html">DigiTimes</a>)</p><h4>eXtra-Dense Pluggable Optics (XPO): Liquid Cooled Insanity</h4><p>A single sentence on Arista&#8217;s blog caught my attention.</p><blockquote><p>XPO density is a game changer. 
A single XPO module replaces 8 OSFP modules.</p></blockquote><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!fj0B!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F532f947e-7600-4823-a2fc-bfa911d72c90_940x432.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!fj0B!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F532f947e-7600-4823-a2fc-bfa911d72c90_940x432.png 424w, https://substackcdn.com/image/fetch/$s_!fj0B!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F532f947e-7600-4823-a2fc-bfa911d72c90_940x432.png 848w, https://substackcdn.com/image/fetch/$s_!fj0B!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F532f947e-7600-4823-a2fc-bfa911d72c90_940x432.png 1272w, https://substackcdn.com/image/fetch/$s_!fj0B!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F532f947e-7600-4823-a2fc-bfa911d72c90_940x432.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!fj0B!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F532f947e-7600-4823-a2fc-bfa911d72c90_940x432.png" width="940" height="432" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/532f947e-7600-4823-a2fc-bfa911d72c90_940x432.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:432,&quot;width&quot;:940,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;COMPARISON for ANDY2 (1)&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="COMPARISON for ANDY2 (1)" title="COMPARISON for ANDY2 (1)" srcset="https://substackcdn.com/image/fetch/$s_!fj0B!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F532f947e-7600-4823-a2fc-bfa911d72c90_940x432.png 424w, https://substackcdn.com/image/fetch/$s_!fj0B!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F532f947e-7600-4823-a2fc-bfa911d72c90_940x432.png 848w, https://substackcdn.com/image/fetch/$s_!fj0B!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F532f947e-7600-4823-a2fc-bfa911d72c90_940x432.png 1272w, https://substackcdn.com/image/fetch/$s_!fj0B!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F532f947e-7600-4823-a2fc-bfa911d72c90_940x432.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" 
stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Source: Arista</figcaption></figure></div><p>The 8 OSFP connectors you see on the right are pluggables that go into ports in the optical switch. Now, the XPO module form factor provides 12.8 Tbps in a single plug, and it needs liquid cooling too. If you need a better introduction to what all the optical terminology means, <a href="https://www.viksnewsletter.com/p/a-complete-guide-to-optical-transceiver-nomenclature?utm_source=publication-search">check out my earlier post</a>.</p><p>Arista goes on to explain the benefit XPO brings quite nicely.</p><blockquote><p>In short, XPO allows customers to build large AI data centers with one quarter the switch racks. This is hugely important for both scale-up and scale-out applications, where without XPO the number of traditional switch racks would exceed the number of GPU racks.</p><p>Imagine a 400 MW AI datacenter with 1024 GPU racks of 128 GPUs each for a total of 128,000 GPUs. 
Assume 12.8T scale-up and 1.6T scale-out bandwidth per GPU. With OSFP switch racks that have a density of 1.6 Pbps per rack, this would require more than 1400 switch racks for scale-up and scale-out fabrics. With XPO, this would <strong>require 75% fewer racks, saving over 1050 racks or 44 % of the floor space.</strong></p></blockquote><p>More speed, fewer racks is good. Pretty sure you&#8217;ll see this at their OFC booth if you happen to stop by.</p><p>(via <a href="https://blogs.arista.com/blog/ai-datacenters-are-reshaping-the-optics-industry">Arista blog</a>)</p><p>Have a great weekend! NVIDIA GTC and OFC 2026 are both next week. Prepare for an explosion of new information! &#128165;</p>]]></content:encoded></item><item><title><![CDATA[Why Optical Circuit Switching is Arista Networks’ Long-Term Problem]]></title><description><![CDATA[OCS is not a near term danger to Arista but can pose a structural threat to Arista&#8217;s &#8220;blue-box&#8221; networking moat in the long run.]]></description><link>https://www.viksnewsletter.com/p/arista-ocs-threat</link><guid isPermaLink="false">https://www.viksnewsletter.com/p/arista-ocs-threat</guid><dc:creator><![CDATA[Vikram Sekar]]></dc:creator><pubDate>Wed, 11 Mar 2026 10:21:10 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/cc4cdc11-4865-4335-a232-222b14fd3a06_1456x1048.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In last week&#8217;s <a href="https://www.youtube.com/@SemiDoped">Semi Doped Podcast</a>, I proposed a &#8220;4D chess move&#8221; that I thought Hock Tan, CEO of Broadcom $AVGO, was playing. 
Why would a CEO whose company has a strong history of products in Co-Packaged Optics (CPO) go out of his way to play up copper interconnects on the most recent earnings call?</p><p>The hypothesis went something like this: if networking went to CPO, then Broadcom&#8217;s switching silicon business (even with CPO enabled) is open to disruption by optics-native solutions like Optical Circuit Switches (OCS). <strong>Copper staying alive keeps the game on Broadcom&#8217;s turf</strong>. This has implications further down the stack for companies like Arista Networks <span class="cashtag-wrap" data-attrs="{&quot;symbol&quot;:&quot;$ANET&quot;}" data-component-name="CashtagToDOM"></span>, who buy Broadcom&#8217;s Tomahawk series switches for their &#8220;blue-box&#8221; networking solutions, then layer their proprietary software (EOS) on top.</p><p>Later in the week, FundaAI mirrored a similar sentiment in their <a href="https://fundaai.substack.com/p/researchcomment-on-hock-tans-view">weekly research note</a>:</p><blockquote><p>If NVIDIA&#8217;s ecosystem were to move scale-out networking toward CPO at a large scale, it would likely be negative for Broadcom. On one hand, it could allow NVIDIA to pull more of the switching and optical stack into its own system architecture, weakening the role of merchant switches such as the Tomahawk family. On the other hand, widespread CPO or NPO deployment would structurally reduce the value of traditional pluggable optics, DSPs, and retimers &#8212; segments that are part of Broadcom&#8217;s existing profit pool.</p></blockquote><p>FundaAI argues that the impact of near-term CPO deployment goes beyond just Tomahawk switches for Broadcom and affects a whole swath of other products geared towards faster copper interconnects such as DSPs, retimers, and pluggable optics. This directionally extends my original displacement theory.</p><p>In this post, we go beyond just the copper vs. 
optics interconnect discussion and look at a higher tier of the AI networking infrastructure. Specifically, we address whether OCS poses a structural threat to blue-box networking solutions like Arista&#8217;s, which are especially poised for an explosive growth period in 2026.</p><div><hr></div><p>For free subscribers:</p><ul><li><p><strong>The Structural Threat:</strong> Introduction to Optical Circuit Switching (OCS) as a long-term threat to Arista Networks&#8217; proprietary &#8220;blue-box&#8221; networking solutions.</p></li><li><p><strong>Arista&#8217;s Moat:</strong> How its software prowess as a hardware company creates a moat.</p></li><li><p><strong>Rise and deployment of OCS:</strong> Details on how OCS works and its advantages. How Google successfully deployed OCS, achieving 30% capex and 41% power reduction.</p></li></ul><p>For paid subscribers:</p><ul><li><p><strong>Protecting the Moat:</strong> Analysis of why OCS cannot easily slot into a spine switch&#8217;s role yet.</p></li><li><p><strong>Near-Term Outlook/Long-Term Contraction:</strong> Discussion of Arista&#8217;s strong near-term growth driven by AI accelerator deployments, and the real threat to Arista&#8217;s high-margin spine switch business from OCS.</p></li><li><p><strong>Future Scenarios:</strong> Three possible ways optics could play out for Arista.</p></li><li><p><strong>Sell-Side Critique:</strong> Commentary that underplays the significance of the OCS threat.</p></li></ul><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.viksnewsletter.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><em>Upgrade to a paid subscription to get full access to deep dives like this one. 
You can expense it too!</em></p></div></div></div><p>If you are not a paid subscriber, you can purchase just this article using the button below. You can find the whole catalog of articles for purchase <a href="https://www.semiexponent.com/products">at this link</a>.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.semiexponent.com/why-optical-circuit-switching-is-arista-networks-long-term-problem&quot;,&quot;text&quot;:&quot;Buy the ebook&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.semiexponent.com/why-optical-circuit-switching-is-arista-networks-long-term-problem"><span>Buy the ebook</span></a></p><div><hr></div><h3>Arista&#8217;s Software Moat</h3><p>Arista is a networking company that buys merchant silicon from Broadcom, layers its Extensible Operating System (EOS) on top, and works with contract manufacturers like Jabil, Sanmina, and Foxconn to build the networking switch hardware that goes into datacenter racks. The important thing to note is that Arista designs no silicon. They differentiate entirely on software.</p><p>Arista&#8217;s EOS software is the moat &#8212; but it is a <strong>packet switching</strong> moat: one that is most prone to be displaced by optical circuit switching (OCS) in the medium to long term. Every piece of data flowing through an Arista switch gets broken into packets, and the silicon inspects each one to decide where it goes. At scale, this creates enormous software complexity: congestion control, load balancing, real-time telemetry across hundreds of ports. 
That complexity is what Arista is really good at, and what customers pay a premium for. Arista&#8217;s &#8220;blue-box&#8221; approach packages a single software platform (EOS) that works across its entire product portfolio. This is an important moat because customers can deploy network solutions at scale with minimal software engineering effort. The added software layer makes blue-box switches costlier, but turnkey.</p><p>On the other hand, companies like Edgecore and Celestica provide &#8220;white-box&#8221; networking solutions: the networking hardware with only a primitive software layer that supports open-source network operating systems like <a href="https://sonicfoundation.dev/">SONiC</a>, leaving the software implementation to the user. White-box solutions are substantially cheaper and allow customers to bring their own software if they already have it, or have the means to develop it. Meta, for example, runs their FBOSS software on white-box hardware. It also avoids vendor lock-in.</p><p>Arista is essentially a software company with low capital expenditure, and the business is doing great, with the most recent earnings call guiding $3.25B in AI networking revenue for 2026, doubling YoY. Customer concentration is high, with 42% of revenue coming from just two major hyperscalers (likely Meta, Microsoft). But when it comes to Arista, there are a few important things to note:</p><ol><li><p>Their spine switches are the high-ASP, high-margin product line. The EOS software is most useful when the complexity is high, like in spine switches that have &gt;500 ports.</p></li><li><p>The rack switch needs to be updated for every networking generation because it relies on the capabilities of the underlying silicon inside. For example, Broadcom&#8217;s Tomahawk 6 supports a total data bandwidth of 102.4 Tbps, while the upcoming Tomahawk 7 is expected to double that.</p></li><li><p>The price premium of Arista switches becomes less defensible at lower network layers. 
For example, Meta is one of Arista&#8217;s biggest customers while also having their in-house FBOSS networking solution. Meta likely uses a hybrid white/blue-box approach where they use lower-cost switches in the leaf/top-of-rack layer, but blue-box for spine switches.</p></li></ol><p>If you need an introduction to leaf/spine and networking/interconnect architecture, check out an <a href="https://www.viksnewsletter.com/i/173009699/leaf-spine-and-core-switches">earlier deep-dive</a>. If you want to go deeper into Arista&#8217;s business model, Chipstrat has a <a href="https://open.substack.com/pub/chipstrat/p/aristas-3-billion-ai-bet-and-the?utm_campaign=post-expanded-share&amp;utm_medium=post%20viewer">nice explanation</a>.</p><h3>The Rise of Optical Circuit Switching</h3><p>Optical circuit switches do not deal with data packets in the electrical domain. Instead, they simply redirect light from the input port to the output port using a variety of methods such as MEMS mirrors (Google/Lumentum), liquid crystals (Coherent), silicon photonics (iPronics), piezoelectric actuators (Polatis), or even robotic manipulators (Telescent). 
Think of OCS as a way to simply steer light in the right direction. With CPO emerging, the entire routing domain can remain purely optical, with the conversion to electrical signals happening right at the silicon level in the co-packaged optical engine.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!d8D8!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3aea6daa-2d6d-4107-9133-0f35d841d02d_2048x1234.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!d8D8!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3aea6daa-2d6d-4107-9133-0f35d841d02d_2048x1234.png 424w, https://substackcdn.com/image/fetch/$s_!d8D8!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3aea6daa-2d6d-4107-9133-0f35d841d02d_2048x1234.png 848w, https://substackcdn.com/image/fetch/$s_!d8D8!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3aea6daa-2d6d-4107-9133-0f35d841d02d_2048x1234.png 1272w, https://substackcdn.com/image/fetch/$s_!d8D8!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3aea6daa-2d6d-4107-9133-0f35d841d02d_2048x1234.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!d8D8!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3aea6daa-2d6d-4107-9133-0f35d841d02d_2048x1234.png" width="1456" height="877" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3aea6daa-2d6d-4107-9133-0f35d841d02d_2048x1234.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:877,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!d8D8!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3aea6daa-2d6d-4107-9133-0f35d841d02d_2048x1234.png 424w, https://substackcdn.com/image/fetch/$s_!d8D8!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3aea6daa-2d6d-4107-9133-0f35d841d02d_2048x1234.png 848w, https://substackcdn.com/image/fetch/$s_!d8D8!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3aea6daa-2d6d-4107-9133-0f35d841d02d_2048x1234.png 1272w, https://substackcdn.com/image/fetch/$s_!d8D8!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3aea6daa-2d6d-4107-9133-0f35d841d02d_2048x1234.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Source: FutureWei Technologies</figcaption></figure></div><p>One major advantage of an OCS switch is that it will support multiple generations of interconnect speeds like 800G, 1.6T, 3.2T, and so on. The reason is that only light is being rerouted, and the OCS does not really care how fast the signal encoding is. This inherently makes OCS future-proof compared to silicon switches that need upgrading with every networking generation.</p><p>Google was the earliest to deploy OCS in their Jupiter network architecture, in which the OCS component was called Project Apollo. Their early deployments were based on in-house MEMS OCS switches, but of late Google has been switching to commercial MEMS OCS providers like Lumentum. Lumentum&#8217;s last earnings call stated a $400M order backlog in OCS hardware that was net-new to the company, from three different customers. 
Coherent is also seeing increased customer engagement, with its OCS backlog growing QoQ.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!CnQt!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdc43f8db-9b93-487b-b6ae-f5b1db88ea05_1208x676.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!CnQt!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdc43f8db-9b93-487b-b6ae-f5b1db88ea05_1208x676.png 424w, https://substackcdn.com/image/fetch/$s_!CnQt!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdc43f8db-9b93-487b-b6ae-f5b1db88ea05_1208x676.png 848w, https://substackcdn.com/image/fetch/$s_!CnQt!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdc43f8db-9b93-487b-b6ae-f5b1db88ea05_1208x676.png 1272w, https://substackcdn.com/image/fetch/$s_!CnQt!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdc43f8db-9b93-487b-b6ae-f5b1db88ea05_1208x676.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!CnQt!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdc43f8db-9b93-487b-b6ae-f5b1db88ea05_1208x676.png" width="1208" height="676" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/dc43f8db-9b93-487b-b6ae-f5b1db88ea05_1208x676.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:676,&quot;width&quot;:1208,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!CnQt!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdc43f8db-9b93-487b-b6ae-f5b1db88ea05_1208x676.png 424w, https://substackcdn.com/image/fetch/$s_!CnQt!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdc43f8db-9b93-487b-b6ae-f5b1db88ea05_1208x676.png 848w, https://substackcdn.com/image/fetch/$s_!CnQt!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdc43f8db-9b93-487b-b6ae-f5b1db88ea05_1208x676.png 1272w, https://substackcdn.com/image/fetch/$s_!CnQt!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdc43f8db-9b93-487b-b6ae-f5b1db88ea05_1208x676.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path 
d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Source: Coherent</figcaption></figure></div><p>The reason Google is an early adopter is their <a href="https://research.google/pubs/jupiter-evolving-transforming-googles-datacenter-network-via-optical-circuit-switches-and-software-defined-networking/">key early insight in 2022</a> that traffic patterns on long timescales are quite predictable and traditional leaf-spine architectures are typically overkill. Because traffic is stable, network reconfigurations are rarely required, so Google could live with the downsides of OCS technology. And when reconfiguration is actually necessary, the switching speed need not be as fast as what silicon offers, which is good for OCS because many of the technologies currently in use are slower. 
Google&#8217;s software prowess and unique network architectures have enabled them to steadily deploy OCS in their spine network, including in 3D torus configurations of TPU clusters.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!fG74!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc40e238-384a-4ce8-b48e-2ed9d5972cd5_1213x686.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!fG74!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc40e238-384a-4ce8-b48e-2ed9d5972cd5_1213x686.png 424w, https://substackcdn.com/image/fetch/$s_!fG74!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc40e238-384a-4ce8-b48e-2ed9d5972cd5_1213x686.png 848w, https://substackcdn.com/image/fetch/$s_!fG74!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc40e238-384a-4ce8-b48e-2ed9d5972cd5_1213x686.png 1272w, https://substackcdn.com/image/fetch/$s_!fG74!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc40e238-384a-4ce8-b48e-2ed9d5972cd5_1213x686.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!fG74!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc40e238-384a-4ce8-b48e-2ed9d5972cd5_1213x686.png" width="1213" height="686" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/bc40e238-384a-4ce8-b48e-2ed9d5972cd5_1213x686.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:686,&quot;width&quot;:1213,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!fG74!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc40e238-384a-4ce8-b48e-2ed9d5972cd5_1213x686.png 424w, https://substackcdn.com/image/fetch/$s_!fG74!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc40e238-384a-4ce8-b48e-2ed9d5972cd5_1213x686.png 848w, https://substackcdn.com/image/fetch/$s_!fG74!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc40e238-384a-4ce8-b48e-2ed9d5972cd5_1213x686.png 1272w, https://substackcdn.com/image/fetch/$s_!fG74!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc40e238-384a-4ce8-b48e-2ed9d5972cd5_1213x686.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path 
d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Source: Telescent</figcaption></figure></div><p>Although Google is using OCS in production, adoption in other large-scale deployments does not go much further than that at present. The other customers in Lumentum&#8217;s $400M OCS backlog could very well be Microsoft, Nvidia, or Meta, given their participation in the Open Compute Project&#8217;s <a href="https://www.opencompute.org/blog/the-open-compute-project-announces-new-optical-circuit-switching-ocs-project">recently announced</a> OCS project. Uses for OCS go beyond spine switches too &#8211; with applications emerging in backup pooling, scale-out, and, when everything goes optical, possibly even scale-up. 
But by all practical measures, OCS is still in its early days, and challenges remain to be solved.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!pmFA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5856f565-0748-426c-b2ac-92649431a7d7_2048x1226.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!pmFA!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5856f565-0748-426c-b2ac-92649431a7d7_2048x1226.png 424w, https://substackcdn.com/image/fetch/$s_!pmFA!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5856f565-0748-426c-b2ac-92649431a7d7_2048x1226.png 848w, https://substackcdn.com/image/fetch/$s_!pmFA!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5856f565-0748-426c-b2ac-92649431a7d7_2048x1226.png 1272w, https://substackcdn.com/image/fetch/$s_!pmFA!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5856f565-0748-426c-b2ac-92649431a7d7_2048x1226.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!pmFA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5856f565-0748-426c-b2ac-92649431a7d7_2048x1226.png" width="1456" height="872"
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5856f565-0748-426c-b2ac-92649431a7d7_2048x1226.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:872,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!pmFA!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5856f565-0748-426c-b2ac-92649431a7d7_2048x1226.png 424w, https://substackcdn.com/image/fetch/$s_!pmFA!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5856f565-0748-426c-b2ac-92649431a7d7_2048x1226.png 848w, https://substackcdn.com/image/fetch/$s_!pmFA!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5856f565-0748-426c-b2ac-92649431a7d7_2048x1226.png 1272w, https://substackcdn.com/image/fetch/$s_!pmFA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5856f565-0748-426c-b2ac-92649431a7d7_2048x1226.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Source: FutureWei Technologies</figcaption></figure></div><p><strong>After the paywall:</strong> </p><ul><li><p><strong>Protecting the Moat:</strong> Analysis of why OCS cannot easily slot into a spine switch&#8217;s role yet.</p></li><li><p><strong>Near-Term Outlook/Long-Term Contraction:</strong> Discussion of Arista&#8217;s strong near-term growth driven by AI accelerator deployments, and the real threat to Arista&#8217;s high-margin spine switch business from OCS.</p></li><li><p><strong>Future Scenarios:</strong> Three possible ways optics could play out for Arista.</p></li><li><p><strong>Sell-Side Critique:</strong> Commentary that underplays the significance of the OCS threat.</p></li></ul>
      <p>
          <a href="https://www.viksnewsletter.com/p/arista-ocs-threat">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[🍪 TWiC: Nvidia $2B, The Hock Shock, Credo's A(E/L)C, Ayar Series E, Intel Changes, Memory]]></title><description><![CDATA[This Week in Chips: Key developments across the semiconductor and adjacent universes.]]></description><link>https://www.viksnewsletter.com/p/twic-nvidia-2b-the-hock-shock-credos</link><guid isPermaLink="false">https://www.viksnewsletter.com/p/twic-nvidia-2b-the-hock-shock-credos</guid><dc:creator><![CDATA[Vikram Sekar]]></dc:creator><pubDate>Fri, 06 Mar 2026 12:32:04 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!ltM7!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7ccca1ea-784c-4f23-b972-faa71594ddca_679x574.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Here are the results of last week&#8217;s poll where I asked if I should continue this column. It seems like most of you like it, so let&#8217;s keep going.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!NaNq!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd58633b2-19d5-4038-bc42-84585c00a793_1222x624.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!NaNq!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd58633b2-19d5-4038-bc42-84585c00a793_1222x624.png 424w, https://substackcdn.com/image/fetch/$s_!NaNq!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd58633b2-19d5-4038-bc42-84585c00a793_1222x624.png 848w, 
https://substackcdn.com/image/fetch/$s_!NaNq!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd58633b2-19d5-4038-bc42-84585c00a793_1222x624.png 1272w, https://substackcdn.com/image/fetch/$s_!NaNq!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd58633b2-19d5-4038-bc42-84585c00a793_1222x624.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!NaNq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd58633b2-19d5-4038-bc42-84585c00a793_1222x624.png" width="569" height="290.5531914893617" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d58633b2-19d5-4038-bc42-84585c00a793_1222x624.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:624,&quot;width&quot;:1222,&quot;resizeWidth&quot;:569,&quot;bytes&quot;:68577,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.viksnewsletter.com/i/189940174?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd58633b2-19d5-4038-bc42-84585c00a793_1222x624.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!NaNq!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd58633b2-19d5-4038-bc42-84585c00a793_1222x624.png 424w, 
https://substackcdn.com/image/fetch/$s_!NaNq!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd58633b2-19d5-4038-bc42-84585c00a793_1222x624.png 848w, https://substackcdn.com/image/fetch/$s_!NaNq!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd58633b2-19d5-4038-bc42-84585c00a793_1222x624.png 1272w, https://substackcdn.com/image/fetch/$s_!NaNq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd58633b2-19d5-4038-bc42-84585c00a793_1222x624.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" 
y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Here&#8217;s some content from the past week in case you missed it.</p><ul><li><p><strong>Substack</strong>: <a href="https://www.viksnewsletter.com/p/gtc-2026-preview-implications-of-sram-decode">GTC 2026 Preview | Implications of Nvidia&#8217;s SRAM-Decode Hardware on the Inference Market</a></p></li><li><p><strong>Semi Doped Podcast</strong>: <a href="https://youtu.be/VcRfZKofBGo?si=U3kIGnOOLoz2zuTD">Optical Supply Chain: What would you buy?</a> (also available on all podcast streaming platforms). We have a podcast being edited about the copper vs optical debate this week, which should be out shortly.</p></li></ul><p>Lots to chatter about today. I&#8217;m experimenting with a slightly shorter format inspired by <a href="https://www.construction-physics.com/">Construction Physics</a>&#8217; reading list posts. Let me know how you like it.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.viksnewsletter.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><em>This publication is only possible because of your support. Please consider upgrading to a paid sub. You can expense it too!</em></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><div><hr></div><h4>Nvidia&#8217;s $2B Optics Investment</h4><p>Nvidia has invested $2B each into Lumentum and Coherent, which are primary suppliers of lasers for datacenter optics. 
This investment is bullish for optics and is a direct signal that AI hardware makers are taking optics seriously. Many assumed that this investment was meant to guarantee lasers for scale-up optics, which has been the talk of the town ever since Lumentum&#8217;s last earnings call. In reality, Nvidia might be securing their CW laser supply for CPO, or even for optical circuit switches (OCS), for which Lumentum reported a $400M backlog. Coherent serves as a great second source for all of this.</p><p>(via <a href="https://www.nextplatform.com/connect/2026/03/02/nvidia-sees-the-light-on-silicon-photonics-and-maybe-optical-switching/4093099">The Next Platform</a>)</p><h4>Broadcom Call: Copper vs Optics</h4><p>The <span class="cashtag-wrap" data-attrs="{&quot;symbol&quot;:&quot;$AVGO&quot;}" data-component-name="CashtagToDOM"></span> earnings call was bullish with lots of revenue from AI semiconductors, but Hock stirred up a hornet&#8217;s nest with this statement.</p><blockquote><p>But I&#8217;m talking about scaling up in a rack, in a cluster domain. You really want to use direct attached copper as long as you can. And we are still based on our technology that Broadcom has with -- especially on connecting XPU to XPU or even GPU to GPU, we can do it with copper, and we can push the envelope from 100G to 200G to even 400G.<strong> We have SerDes now running 400G that can drive distance on a rack to run copper. </strong>Well, all I&#8217;m trying to say is<strong> you don&#8217;t need to go run into some bright shiny objects called CPO, even as we are the lead in CPOs. CPOs will come in its time, not this year, maybe not next year, but in its time.</strong></p></blockquote><p>Amid all the recent news about scale-up optics, Hock&#8217;s view that copper will stay prevalent for scale-up is a contrarian one. The claim of lab results at 400G/lane implies that copper scale-up will continue into the 3.2T networking generation. 
The dissonance between large laser orders and the insistence on copper for scale-up has confused many (including me). Maybe copper and optics will coexist for a long time; I plan a deep dive to investigate further.</p><p>(via <a href="https://investors.broadcom.com/events/event-details/q1-2026-broadcom-earnings-conference-call">Broadcom</a>)</p><h4>Credo Emphasizes Reliability and Long Life for AEC</h4><p><span class="cashtag-wrap" data-attrs="{&quot;symbol&quot;:&quot;$CRDO&quot;}" data-component-name="CashtagToDOM"></span>&#8217;s recent earnings call showed strong numbers from the specialist cable company. They doubled revenue from 2024 to 2025, and are expected to triple revenue in 2026. That&#8217;s 6x growth in two years. Following the money shows that copper is alive and well. They claim that 200G AEC is ready and has a reach of 5m. Their foray into optics brings their &#8220;Zero Flap&#8221; system of link health and telemetry to scale-out optical cables, which are still the dominant method of hooking up racks. They claim that their Active LED cable (ALC) will fill the gap between copper and optics, with cost and energy efficiency similar to copper, but with reach extending to 10m in the short term, and eventually 30m.</p><p>(via <a href="https://credosemi.com/">Credo</a>)</p><h4>Micro-LED co-packaged optics cut power consumption to just 5% that of copper cables</h4><p>TrendForce reports that micro-LED co-packaged optics (CPOs) achieve energy consumption of just 1-2 picojoules per bit compared to copper&#8217;s 10+ pJ/bit. The architecture integrates sub-50 micrometer micro-LED chips with CMOS driver circuits, delivering integration density exceeding 0.5 Tbps per square millimeter and supporting 800 Gbps and 1.6 Tbps transmission standards. 
Taiwanese manufacturers AUO, Innolux, and PlayNitride are building vertically integrated micro-LED production capabilities, positioning themselves as key suppliers for this emerging technology.</p><p>(via <a href="https://www.semiconductor-today.com/news_items/2026/mar/trendforce-040326.shtml">semiconductor-today</a>)</p><h4>Ayar Labs Closes $500M Funding Round Backed by Nvidia and AMD</h4><p>Ayar Labs raised $500 million in a Series E round led by Neuberger Berman. The round values the co-packaged optics startup at $3.75 billion and brings total funding to $870 million. The company plans to use the new funds to accelerate CPO development, scale high-volume production and test capacity, and expand global operations. Their UCIe-compliant TeraPhy solution is an optical I/O chiplet that can be dropped in next to a GPU or switch ASIC, enabling optical GPU-GPU or GPU-switch links for AI scale-up. </p><p>(via <a href="https://ayarlabs.com/news/ayar-labs-closes-500m-series-e-accelerates-volume-production-of-co-packaged-optics/">Ayar Labs</a>)</p><h4>INTC Chair of the Board Frank Yeary Retires; Dr. Craig Barratt Takes the Helm</h4><p>Frank Yeary&#8217;s retirement resulted in much rejoicing on X due to his proposal a few years ago to split Intel&#8217;s business into product and manufacturing parts, and then sell off the manufacturing part. Many disliked this idea because Intel is a legendary company responsible for a large portion of the chip innovations in use today, and is the last bastion of US leading-edge semiconductor dominance. Dr. Barratt, who replaces him, has an engineering background and decades of experience at Intel and Qualcomm. He is widely known for leading Atheros Communications, which was eventually bought by Qualcomm. Dr. Barratt seems to hold the view that Intel should be a world-class IDM.</p><p>(via <a href="https://www.tomshardware.com/pc-components/cpus/intel-chairman-frank-yeary-retires-craig-barrat-to-become-the-new-chairman-of-the-board-of-directors">Tom&#8217;s Hardware</a>)</p><h4>INTC Reportedly Reconsiders 18A for External Use, Hinting EMIB Could Generate Billions by 2H26</h4><p>Intel CEO Lip-Bu Tan has reversed course on the 18A node, now viewing it as viable for external foundry customers after initially planning to reserve it for internal use and position 14A as the primary foundry offering. The customizable 18A-P variant is reportedly drawing strong interest. Yields remain limited but are improving monthly. Intel reaffirmed the 14A timeline, with risk production in 2027 and volume in 2029. Perhaps most notably, Intel&#8217;s EMIB advanced packaging technology could generate billions in revenue starting as early as H2 2026, with reports suggesting NVIDIA may adopt EMIB for future GPU production. </p><p>(via <a href="https://www.trendforce.com/news/2026/03/05/news-intel-reportedly-reconsiders-18a-for-external-use-hinting-emib-could-generate-billions-by-2h26/">TrendForce</a>)</p><h4>Memory stocks drop as US-Iran war threatens infrastructure and supply chains</h4><p>Samsung Electronics and SK Hynix fell 10% as the Kospi index sank over 12% amid the escalating US-Iran conflict. The selloff was driven by fears that a prolonged conflict could disrupt energy supplies and shipping routes &#8212; nearly 20% of global oil flows through the now-closed Strait of Hormuz. 
Some news outlets mistakenly attributed the selloff to the SRAM-decode hardware expected to be announced at Nvidia GTC 2026, which will be held next week.</p><p>(via <a href="https://www.scmp.com/tech/article/3345407/asian-tech-stocks-reel-us-iran-war-disrupts-energy-logistics-supply-chains">SCMP</a>)</p><div><hr></div><h4>Stuff I enjoyed reading this week</h4><ul><li><p><a href="https://www.chipstrat.com/p/broadcom-makes-lasers">Broadcom Makes Lasers?</a> &#8212; Austin Lyons has a nice piece comparing laser technologies between Broadcom, Lumentum, and Coherent.</p></li><li><p><a href="https://enertuition.substack.com/p/broadcom-is-not-likely-to-fly-on">Broadcom Is Not Likely To Fly On Those Big 2027 Forecasts</a> &#8212; BTH has a skeptical view of AVGO&#8217;s recent earnings call, especially pushing back on Hock&#8217;s comments on the Customer Owned Tooling (COT) threat.</p></li><li><p><a href="https://irrationalanalysis.substack.com/p/march-morgan-stanley-tmt-madness">March Madness</a> &#8212; Irrational Analysis is fun to read as always. I admire that he reversed his stance on optical clocking for Groq chips. 
More people should do this; stuff is complicated and it&#8217;s okay to change your mind.</p></li><li><p><a href="https://x.com/amytam01/status/2023593365401636896?s=20">The Cost of Staying</a> &#8212; Amy Tam on X has an interesting thought piece on the opportunity cost of staying in a job you don&#8217;t like.</p></li><li><p><a href="https://x.com/insane_analyst/status/2028925943621009531?s=20">QCOM Robot</a> &#8212; I had quite a laugh at what happens at the end of this video.</p></li></ul><p>Finally, an interesting plot, via <a href="https://x.com/JoshKale/status/2028889347794047071?s=20">@JoshKale on X</a>.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ltM7!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7ccca1ea-784c-4f23-b972-faa71594ddca_679x574.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ltM7!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7ccca1ea-784c-4f23-b972-faa71594ddca_679x574.png 424w, https://substackcdn.com/image/fetch/$s_!ltM7!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7ccca1ea-784c-4f23-b972-faa71594ddca_679x574.png 848w, https://substackcdn.com/image/fetch/$s_!ltM7!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7ccca1ea-784c-4f23-b972-faa71594ddca_679x574.png 1272w, https://substackcdn.com/image/fetch/$s_!ltM7!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7ccca1ea-784c-4f23-b972-faa71594ddca_679x574.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!ltM7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7ccca1ea-784c-4f23-b972-faa71594ddca_679x574.png" width="679" height="574" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7ccca1ea-784c-4f23-b972-faa71594ddca_679x574.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:574,&quot;width&quot;:679,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Image&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Image" title="Image" srcset="https://substackcdn.com/image/fetch/$s_!ltM7!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7ccca1ea-784c-4f23-b972-faa71594ddca_679x574.png 424w, https://substackcdn.com/image/fetch/$s_!ltM7!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7ccca1ea-784c-4f23-b972-faa71594ddca_679x574.png 848w, https://substackcdn.com/image/fetch/$s_!ltM7!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7ccca1ea-784c-4f23-b972-faa71594ddca_679x574.png 1272w, https://substackcdn.com/image/fetch/$s_!ltM7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7ccca1ea-784c-4f23-b972-faa71594ddca_679x574.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft 
pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Have a great weekend!</p>]]></content:encoded></item><item><title><![CDATA[GTC 2026 Preview | Implications of Nvidia's SRAM-Decode Hardware on the Inference Market]]></title><description><![CDATA[The case for dedicated decode hardware and what it means for AMD, HBM, and the SRAM startup market.]]></description><link>https://www.viksnewsletter.com/p/gtc-2026-preview-implications-of-sram-decode</link><guid isPermaLink="false">https://www.viksnewsletter.com/p/gtc-2026-preview-implications-of-sram-decode</guid><dc:creator><![CDATA[Vikram Sekar]]></dc:creator><pubDate>Wed, 04 Mar 2026 10:36:44 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/e50dfb9e-d3f3-4f9c-ab31-2bfa99c0c2e5_1456x1048.png" length="0" 
type="image/jpeg"/><content:encoded><![CDATA[<p>A recent <a href="https://www.wsj.com/tech/ai/nvidia-plans-new-chip-to-speed-ai-processing-shake-up-computing-market-51c9b86e">Wall Street Journal article</a> stirred excitement about NVIDIA&#8217;s upcoming GTC 2026 announcements:</p><blockquote><p>The company is designing a new system for &#8220;inference&#8221; computing, a form of processing that allows AI models to respond to queries, according to people familiar with the plans. The new platform, set to be revealed at Nvidia&#8217;s GTC developer conference next month, will incorporate a chip designed by startup Groq.</p></blockquote><p>The announcement is even more anticipated because Jensen compared Groq to Mellanox in the last earnings call, implying that just as Mellanox made Nvidia a networking company in 2020, Groq will transform Nvidia into an inference infrastructure company. Since Nvidia acquired Groq for an eye-watering $20B on Christmas Eve last year, the industry has questioned why Nvidia would choose an SRAM-based architecture with a notoriously challenging compiler, especially when many believed Groq&#8217;s approach to be suboptimal. With the rise of agentic AI, model providers like OpenAI have complained that existing systems are too slow. The acquisition of Groq is Nvidia&#8217;s answer to inference speed.</p><p>Here, we will explore how inference is actually two distinct hardware problems: prefill and decode. Prefill was addressed by the Nvidia CPX chip announcement last year. 
The upcoming Nvidia system will tackle the latter: specialized decode hardware, dubbed by some as Nvidia LPX.</p><p>After the paywall, we will discuss what might be announced at GTC 2026, mid-plane PCB rumors for inference accelerators, what it means for HBM, a list of inference startups ripe for acquisition, what AMD needs to do immediately, and Groq&#8217;s Achilles&#8217; heel that SRAM solutions must beat.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.viksnewsletter.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><em>A paid subscription gets you deep-dives like this one, on a variety of topics in AI hardware. Consider expensing it!</em></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>If you don&#8217;t have a paid subscription, you can purchase just this article in epub and pdf formats using the button below. 
Check out all the reports available in this <a href="https://www.semiexponent.com/products">link</a>.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.semiexponent.com/gtc-2026-preview-implications-of-nvidia-s-sram-decode-hardware-on-the-inference-market&quot;,&quot;text&quot;:&quot;Buy the ebook&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.semiexponent.com/gtc-2026-preview-implications-of-nvidia-s-sram-decode-hardware-on-the-inference-market"><span>Buy the ebook</span></a></p><h3>Different Hardware for Training vs Inference</h3><p>The AI age has focused on training first, which requires lots of matrix multiplications and high-bandwidth memory (HBM) to store, update, and backpropagate weights. Using the same hardware for inference is suboptimal because inference has different bottlenecks. Disaggregated inference today revolves around three phases, each requiring different hardware and software: prefill, decode, and orchestration. 
<a href="https://www.chiplog.io/p/a-deep-dive-into-nvidia-rubin-cpx">Chiplog</a> has a nice historical piece on how it came about.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!9kFH!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff48022ae-2df9-4433-8212-412b6a377925_1999x962.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!9kFH!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff48022ae-2df9-4433-8212-412b6a377925_1999x962.png 424w, https://substackcdn.com/image/fetch/$s_!9kFH!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff48022ae-2df9-4433-8212-412b6a377925_1999x962.png 848w, https://substackcdn.com/image/fetch/$s_!9kFH!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff48022ae-2df9-4433-8212-412b6a377925_1999x962.png 1272w, https://substackcdn.com/image/fetch/$s_!9kFH!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff48022ae-2df9-4433-8212-412b6a377925_1999x962.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!9kFH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff48022ae-2df9-4433-8212-412b6a377925_1999x962.png" width="1456" height="701" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f48022ae-2df9-4433-8212-412b6a377925_1999x962.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:701,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!9kFH!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff48022ae-2df9-4433-8212-412b6a377925_1999x962.png 424w, https://substackcdn.com/image/fetch/$s_!9kFH!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff48022ae-2df9-4433-8212-412b6a377925_1999x962.png 848w, https://substackcdn.com/image/fetch/$s_!9kFH!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff48022ae-2df9-4433-8212-412b6a377925_1999x962.png 1272w, https://substackcdn.com/image/fetch/$s_!9kFH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff48022ae-2df9-4433-8212-412b6a377925_1999x962.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path 
d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Source: Nvidia</figcaption></figure></div><p><strong>Prefill</strong> processes all input tokens simultaneously. Weights are loaded once and multiplied against the entire input sequence in a batch matrix-multiply to create the key-value (KV) cache. This is compute-bound, not memory-bound. Nvidia&#8217;s Rubin CPX targets only prefill and uses GDDR7 instead of HBM because memory bandwidth is not the bottleneck.</p><p><strong>Decode</strong> generates tokens sequentially because of the autoregressive nature of transformers; each token depends on all previous tokens. Since model weights are read from memory for every single token generated, this phase is bottlenecked by memory bandwidth and not compute. GPUs with HBM do not make the best decoding engines because SRAM bandwidth is an order of magnitude higher but with a capacity tradeoff. We have <a href="https://www.viksnewsletter.com/p/a-close-look-at-sram-for-inference?utm_source=publication-search">discussed the SRAM-HBM tradeoffs extensively before</a>. 
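</p><p>To make the decode bottleneck concrete, here is a back-of-envelope sketch in Python. For a dense model at batch size 1, every generated token must stream all the weights from memory, so tokens per second is capped at bandwidth divided by weight bytes. The model size and HBM bandwidth below are illustrative assumptions; only the 80 TB/s SRAM figure comes from this article.</p>

```python
# Bandwidth-bound ceiling on decode throughput for one user (batch = 1):
# every token requires reading all weights once (KV-cache traffic ignored).
def max_tokens_per_sec(bandwidth_tb_s, params_billion, bytes_per_param=1.0):
    bytes_per_token = params_billion * 1e9 * bytes_per_param
    return bandwidth_tb_s * 1e12 / bytes_per_token

# Assumed: a 70B-parameter dense model with FP8 (1-byte) weights.
hbm_ceiling = max_tokens_per_sec(8.0, 70)    # ~8 TB/s, HBM-class (assumed)
sram_ceiling = max_tokens_per_sec(80.0, 70)  # 80 TB/s, SRAM-class (advertised)
print(f"HBM:  ~{hbm_ceiling:.0f} tok/s/user")
print(f"SRAM: ~{sram_ceiling:.0f} tok/s/user")
```

<p>Under these assumptions, the 10x bandwidth gap translates directly into a 10x ceiling on per-user decode speed, which is why SRAM-first designs dominate tokens-per-second benchmarks despite their capacity limits.</p><p>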
Currently, Nvidia&#8217;s decode solution is HBM-based; the acquisition of Groq and its IP eliminates the memory-bandwidth issue.</p><p><strong>Orchestration</strong> ties it all together. Nvidia&#8217;s Dynamo handles KV-cache movement between prefill and decode, and manages <a href="https://www.viksnewsletter.com/p/context-memory-storage-tokenomics?utm_source=publication-search">KV-cache evictions to context storage</a> when the cache outgrows its current tier. In the agentic AI era, the CPU plays an increasingly important role by controlling inputs to prefill, processing outputs from decode, and managing all the software API calls via the Dynamo NIXL library, which handles asynchronous communication between compute blocks.</p><h3>Why LPUs Work for Decode</h3><p>Groq&#8217;s LPU uses SRAM and a Very Long Instruction Word (VLIW) architecture in which the hardware makes no runtime decisions, which makes it fast and deterministic to the last clock cycle. The real technological moat lies in orchestrating everything correctly with a cutting-edge compiler.</p><p>The eight chips in a GroqNode are connected in an all-to-all fashion using short wires whose exact delay is well understood. Then, on each clock cycle, the chip executes one very wide precompiled instruction that simultaneously controls every functional unit on the chip, for example: &#8220;On this cycle, unit A performs this multiplication, unit B moves this data from SRAM bank 3 to bank 7, the network port sends this packet to chip 14.&#8221;</p><p>Such an operation is well suited to the fundamentally simple, sequential task of generating output tokens. In contrast, a general-purpose GPU has to make thousands of runtime decisions: when to fetch data from memory, how to schedule threads, how to route data between cores, and when operations complete. The flexibility is great but largely overkill, and arguably even bad, for a fixed decoding task.
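</p><p>The &#8220;one wide instruction per cycle&#8221; idea can be sketched in a few lines of Python. This is a toy model with made-up unit and operation names, not Groq&#8217;s ISA, just to show why execution time is knowable at compile time: the schedule is the program, and the hardware never deviates from it.</p>

```python
# Toy model of VLIW-style static scheduling (illustrative names, not a real ISA).
# The compiler emits one wide instruction per clock cycle; each slot tells a
# functional unit exactly what to do that cycle. No runtime scheduling exists.
WIDE_PROGRAM = [
    {"matmul": "multiply(tile_0)", "sram": "move(bank3 -> bank7)", "net": "send(chip14)"},
    {"matmul": "multiply(tile_1)", "sram": "nop",                  "net": "recv(chip2)"},
    {"matmul": "accumulate()",     "sram": "move(bank7 -> bank1)", "net": "nop"},
]

def run(program):
    trace = []
    for cycle, instruction in enumerate(program):
        # Every unit acts in lockstep; nothing stalls, nothing is reordered.
        for unit, op in instruction.items():
            trace.append((cycle, unit, op))
    return trace

trace = run(WIDE_PROGRAM)
# Deterministic by construction: program length == execution time in cycles.
print(f"{len(WIDE_PROGRAM)} cycles, {len(trace)} unit-operations")
```

<p>On a GPU, the trace of what ran when is only known after the fact; here it is fixed before the first cycle executes.</p><p>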
The fixed operation of an LPU allows full use of the advertised SRAM bandwidth (80 TB/s), without resource contention or the variability of cache hits/misses.</p><p>This is a capability that Nvidia did not have before the acquisition of Groq: it adds the ability to control the dataflow within a compute block, which is similar in spirit to the systolic arrays in TPUs. The compiler is hard, but it is a solved problem, and Nvidia also got the team that solved it. Groq&#8217;s chips still run on 14nm technology; ported to a recent leading-edge logic node, they could hold much more SRAM per chip.</p><p>If you want to go deep into LPU architecture, <a href="https://blog.codingconfessions.com/">Confessions of a Code Addict</a> has a <a href="https://blog.codingconfessions.com/p/groq-lpu-design">great article</a>.</p><p>Here is what is after the paywall:</p><ul><li><p>An educated guess at what GTC 2026 will look like, LPU scaling considerations, and mid-plane PCB rumors.</p></li><li><p>SRAM-inference impact on HBM and KOPSI correlation</p></li><li><p>The options for the upcoming land grab in SRAM-based accelerators</p></li><li><p>What AMD must do yesterday</p></li></ul>
      <p>
          <a href="https://www.viksnewsletter.com/p/gtc-2026-preview-implications-of-sram-decode">
              Read more
          </a>
      </p>
]]></content:encoded></item><item><title><![CDATA[🍪 TWiC: AMD+Meta, New AI Chips, Citrini]]></title><description><![CDATA[This Week in Chips: Key developments across the semiconductor and adjacent universes.]]></description><link>https://www.viksnewsletter.com/p/twic-amdmeta-new-ai-chips-citrini</link><guid isPermaLink="false">https://www.viksnewsletter.com/p/twic-amdmeta-new-ai-chips-citrini</guid><dc:creator><![CDATA[Vikram Sekar]]></dc:creator><pubDate>Fri, 27 Feb 2026 14:51:05 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/94feaa1e-cd75-4775-83e2-ec12d3ab31e9_685x249.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In a way, TGIF because I&#8217;ve had a lot of changes to deal with this week. I&#8217;ll post an update about it at some point. </p><p>Combined with a short family getaway last weekend, those changes have put me behind on the deep-dive for next week. I&#8217;m planning to revisit Lumentum&#8217;s tech in a bit more depth to understand their laser dominance, where their moat really lies, and for how long. I&#8217;ll let you know what I find.</p><p>As Feb draws to a close, here are some articles you might have missed this month.
The CPU posts were very popular; I recommend checking them out if you haven&#8217;t.</p><ul><li><p><a href="https://www.viksnewsletter.com/p/lumentum-laser-demand-ocs-cpo-optical-scaleup">Lumentum: Laser Demand, OCS, CPO and Optical Scale-Up</a></p></li><li><p><a href="https://www.viksnewsletter.com/p/ai-datacenters-drink-more-water-than-you-think">AI datacenters drink more water than you think</a></p></li><li><p><a href="https://www.viksnewsletter.com/p/the-cpu-bottleneck-in-agentic-ai">The CPU bottleneck in agentic AI and why server CPUs matter more than ever</a></p></li><li><p><a href="https://www.viksnewsletter.com/p/the-ai-datacenter-cpu-yellow-pages">The AI Datacenter CPU Yellow Pages</a></p></li></ul><p>Links to previous &#127850; TWiC posts this month: <a href="https://www.viksnewsletter.com/p/twic-googleaws-capex-nvidia-gpus">Feb 6</a>, <a href="https://www.viksnewsletter.com/p/twic-hbm4-siphos-rise-memory-eats-all-amat">Feb 13</a>, <a href="https://www.viksnewsletter.com/p/twic-meta-nvidia-ai-toilet-play-ai-eda">Feb 20</a>.</p><p>Semi Doped Podcast with <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Austin Lyons&quot;,&quot;id&quot;:8066776,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c180a750-7572-4aff-88e4-317aa435d533_1203x902.jpeg&quot;,&quot;uuid&quot;:&quot;178b5795-d872-44f4-905d-64411c208093&quot;}" data-component-name="MentionToDOM"></span> had some high engagement episodes this month. 
Lots of positive feedback from the interwebs; available on YouTube and all podcast platforms.</p><ul><li><p><a href="https://youtu.be/fPp8GQnGUg8?si=6175VmczZ2lyfsHt">OpenClaw Makes AI Agents and CPUs Get Real</a></p></li><li><p><a href="https://youtu.be/CFIdyjrtwgw?si=h61PzEeOyO01VYVa">A new era of context memory with Val Bercovici from Weka</a></p></li><li><p><a href="https://youtu.be/I1eKrYsgt1k?si=VuVu1w_KJ188ol0M">The future of financing AI infrastructure with Wayne Nelms, CTO Ornn</a></p></li><li><p><a href="https://youtu.be/3t1Oi-vs2JI?si=r8fKyAxtX-dK-wdX">Memory mayhem and capex madness</a></p></li><li><p><a href="https://youtu.be/Mvp-DAfzoRk?si=7BtUWPv1I9eVJxc5">Optical networking supercycle - ALL the tech you NEED to know</a></p></li></ul><p>I had a lot of plans to do scale-across, physical AI, and other stuff in Feb, but the industry moves too fast for me to actually plan ahead. I&#8217;ll just continue winging it.</p><p>Anyway, apart from NVIDIA&#8217;s earnings call this week (spoiler: they made a lot of money), there are some other interesting things to get to.</p><div><hr></div><h3>AMD&#8212;Meta to Deploy 6GW of AI Infrastructure</h3><p>The latest infrastructure deal between AMD and Meta is an exact replica of their deal with OpenAI last year - similar GPU commitments (6GW), warrant shares (10%), etc. These deals make it possible for AMD to make a land grab in the GPU space where NVIDIA holds 90%+ market share, and rapidly raise its overall share of GPU infrastructure. The 10% shareholder dilution from each of these deals isn&#8217;t really a big deal considering that the last tranche will only fully vest when 6GW is purchased and AMD share prices hit $600. </p><p>An important takeaway from the recent CPU posts (linked above) on this newsletter is that the AMD Venice, with its 256c/512t core count, high per-core IPC, and x86 arch, is the Swiss Army knife for reasoning or action-oriented agentic AI.
NVIDIA Vera&#8217;s low core count and ARM-based architecture, and Intel Diamond Rapids&#8217; lack of SMT puts AMD as the prime CPU candidate (<a href="https://enertuition.substack.com/p/nvidia-vera-vs-amd-epyc-only-one">BTH makes a strong case too</a>). The upcoming CPU explosion combined with these massive deals puts AMD in a perfect position to grab a larger market share from NVIDIA.</p><p>It comes down to whether AMD&#8217;s MI450 platform is ready for first token in 2026 as AMD says, or if it will be delayed to 2027 like SemiAnalysis says.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.viksnewsletter.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><em>Upgrade to a paid subscription for deep technical research into semiconductors and AI. Consider expensing it too!</em></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><div><hr></div><h3>Chips for High Speed Inference</h3><p>Three inference chip startups dropped announcements in the same week, raising a combined $1B+ in venture funding: Taalas, MatX, and SambaNova. Each one is betting on an alternative architecture to Nvidia&#8217;s GPU, but differ significantly on what the right approach looks like.</p><h4>Taalas: Burn the Model into the Chip</h4><p>Taalas came out of stealth with the most extreme approach of the three. They hardcode a model&#8217;s weights directly into the chip. 
They&#8217;re secretive about how exactly it is stored, only <a href="https://www.nextplatform.com/compute/2026/02/19/taalas-etches-ai-models-onto-transistors-to-rocket-boost-inference/4092140">stating</a>:</p><blockquote><p><em>We basically have an architecture where we are embedding the models, and we are hard coding the models and the weights into our what we call the <strong>mask ROM recall fabric, which is paired with an SRAM recall fabric</strong>. Together, they are able to store both the model as well as do all the computations of KV cache.</em></p></blockquote><p>Their first chip named HardCore (HC1) is built on TSMC N6 and draws around 200 watts per card. They demoed it running Llama 3.1 8B at 17,000 tokens per second. HC2, targeting up to 20B parameters per chip, is expected in summer 2026.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!oqlx!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a6ba376-fccf-4cef-845d-38cbade02e2b_2080x990.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!oqlx!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a6ba376-fccf-4cef-845d-38cbade02e2b_2080x990.png 424w, https://substackcdn.com/image/fetch/$s_!oqlx!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a6ba376-fccf-4cef-845d-38cbade02e2b_2080x990.png 848w, https://substackcdn.com/image/fetch/$s_!oqlx!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a6ba376-fccf-4cef-845d-38cbade02e2b_2080x990.png 1272w, 
https://substackcdn.com/image/fetch/$s_!oqlx!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a6ba376-fccf-4cef-845d-38cbade02e2b_2080x990.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!oqlx!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a6ba376-fccf-4cef-845d-38cbade02e2b_2080x990.png" width="1456" height="693" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4a6ba376-fccf-4cef-845d-38cbade02e2b_2080x990.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:693,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Chart showing speed comparison between Taalas and competitors - tokens per second per user&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Chart showing speed comparison between Taalas and competitors - tokens per second per user" title="Chart showing speed comparison between Taalas and competitors - tokens per second per user" srcset="https://substackcdn.com/image/fetch/$s_!oqlx!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a6ba376-fccf-4cef-845d-38cbade02e2b_2080x990.png 424w, https://substackcdn.com/image/fetch/$s_!oqlx!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a6ba376-fccf-4cef-845d-38cbade02e2b_2080x990.png 848w, 
https://substackcdn.com/image/fetch/$s_!oqlx!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a6ba376-fccf-4cef-845d-38cbade02e2b_2080x990.png 1272w, https://substackcdn.com/image/fetch/$s_!oqlx!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a6ba376-fccf-4cef-845d-38cbade02e2b_2080x990.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Source: Taalas</figcaption></figure></div><p>The obvious tradeoff is that each chip is locked to one model. 
Taalas says swapping to a new model only requires changing two metal layers, not a full redesign, and they claim a two-month turnaround from frozen weights to deployable PCIe cards. </p><p>You can try inferencing on this hardware on their <a href="https://chatjimmy.ai/">demo platform</a>. I tried it, and it was incredibly fast. The answer appeared nearly instantly but was extraordinarily wrong. I do understand that this piece of news might still not be in the training data, but in this day and age of AI, a little web-search functionality would go a long way toward giving people who try the tech some confidence. It even got their own company wrong!</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!gzXS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd104683c-1492-4379-be82-c0afae38ac2f_684x217.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!gzXS!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd104683c-1492-4379-be82-c0afae38ac2f_684x217.png 424w, https://substackcdn.com/image/fetch/$s_!gzXS!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd104683c-1492-4379-be82-c0afae38ac2f_684x217.png 848w, https://substackcdn.com/image/fetch/$s_!gzXS!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd104683c-1492-4379-be82-c0afae38ac2f_684x217.png 1272w, https://substackcdn.com/image/fetch/$s_!gzXS!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd104683c-1492-4379-be82-c0afae38ac2f_684x217.png 1456w" sizes="100vw"><img
src="https://substackcdn.com/image/fetch/$s_!gzXS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd104683c-1492-4379-be82-c0afae38ac2f_684x217.png" width="684" height="217" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d104683c-1492-4379-be82-c0afae38ac2f_684x217.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:217,&quot;width&quot;:684,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:34870,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.viksnewsletter.com/i/188973578?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd104683c-1492-4379-be82-c0afae38ac2f_684x217.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!gzXS!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd104683c-1492-4379-be82-c0afae38ac2f_684x217.png 424w, https://substackcdn.com/image/fetch/$s_!gzXS!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd104683c-1492-4379-be82-c0afae38ac2f_684x217.png 848w, https://substackcdn.com/image/fetch/$s_!gzXS!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd104683c-1492-4379-be82-c0afae38ac2f_684x217.png 1272w, https://substackcdn.com/image/fetch/$s_!gzXS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd104683c-1492-4379-be82-c0afae38ac2f_684x217.png 1456w" sizes="100vw" 
loading="lazy"></picture><div></div></div></a></figure></div><p>It remains to be seen if customers will commit to &#8220;static&#8221; infrastructure deployments without the option to upgrade. As model releases have yet to reach a steady state and the use cases are still evolving, it is hard to see big commitments to deploy &#8220;hardened&#8221; LLMs at scale. It could be useful for low-cost, smaller-scale deployments where a current model is sufficiently capable to meet inference needs for 3-5 years before inference chips need to be replaced with updated hardware.</p><h4>MatX: SRAM? HBM? No, Both.</h4><p>Typically, the choice between HBM-based accelerators (NVIDIA, Google) and SRAM-based accelerators (Groq, Cerebras) is an either/or situation, which <a href="https://www.viksnewsletter.com/p/a-close-look-at-sram-for-inference">we have discussed in quite some depth</a>. There is always a latency-versus-throughput tradeoff to contend with; SRAM-based approaches give low latency but cannot support large models, while HBM-based approaches support large models but have higher latency.</p><p>MatX&#8217;s approach is to use both to bridge the gap between latency and throughput. In an <a href="https://www.youtube.com/watch?v=qvrdCpLPbuQ">interview</a>, CEO Reiner Pope explains how MatX broke up a large systolic array while preserving the energy and area efficiency systolic arrays are known for, and designed a freshly minted 4-bit variable-precision numeric format to deliver the best AI chips at scale, even if it comes at the cost of smaller-scale deployments and added complexity. <a href="https://x.com/reinerpope/status/2026351870852358492?s=20">From Pope on X</a>:</p><blockquote><p>Mike Gunter and I started MatX because we felt that the best chip for LLMs should be designed from first principles with a deep understanding of what LLMs need and how they will evolve.
We are willing to give up on small-model performance, low-volume workloads, and even ease of programming to deliver on such a chip. </p></blockquote><p>They claim higher throughput on LLMs than any announced system while matching the latency of SRAM-first designs, with over 2,000 tok/s on large MoE models. And unlike Taalas and SambaNova, MatX says their chip handles training, RL, and inference &#8212; not just decode. That&#8217;s a much broader TAM play if they can pull it off.</p><p>The $500M Series B was led by Jane Street and Situational Awareness, with Marvell Technology and the Stripe co-founders participating. Tape-out is planned within the year through TSMC, with shipments targeted for 2027.</p><h4>SambaNova SN50: The Incumbent Alternative</h4><p>SambaNova is the most mature of the three. They&#8217;ve been shipping hardware since 2021 and are now on their third generation with the SN50 RDU (Reconfigurable Dataflow Unit). </p><p>RDUs are a clever invention because they serve to minimize the number of off-chip data movements by having the compiler physically map all the required operations into different parts of the chip beforehand. They use a combination of SRAM, HBM and TBs of DRAM integrated into the accelerator card, so that any PCIe interfaces are avoided (<a href="https://www.viksnewsletter.com/p/d-matrix-in-memory-compute">d-Matrix puts DRAM on the card too</a>, along with SRAM on chip). The compiler essentially decides what goes where, and when movements should happen. This approach allows it to host models up to 10T+ parameters and support context lengths up to 10M tokens. </p><p>SambaNova is leaning hard into agentic workloads, and the three-tier memory is the reason they can. Multiple models can sit in DRAM and HBM simultaneously. When an agent needs to switch from a reasoning model to a code model to a tool-use model mid-chain, the RDU hot-swaps the active model into SRAM in milliseconds. 
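</p><p>The hot-swap idea is easy to sketch. The code below is a hypothetical illustration, not SambaNova&#8217;s actual API: several models stay resident in the slower DRAM/HBM tiers, and switching the active model is a fast tier-to-SRAM swap rather than a cold reload from storage.</p>

```python
# Hypothetical sketch of three-tier "agentic caching" (illustrative only;
# class, method, and model names are made up, not a vendor API).
class TieredModelCache:
    def __init__(self):
        self.resident = {}   # model name -> tier it lives in ("dram" or "hbm")
        self.active = None   # model currently swapped into on-chip SRAM

    def load(self, model, tier="dram"):
        self.resident[model] = tier   # paid once, at deployment time

    def activate(self, model):
        if model not in self.resident:
            raise KeyError(f"{model} not resident: cold load from storage needed")
        self.active = model           # milliseconds: tier -> SRAM, not disk -> SRAM
        return f"{model} active (from {self.resident[model]})"

cache = TieredModelCache()
for name, tier in [("reasoning-70b", "dram"), ("code-34b", "dram"), ("tool-use-8b", "hbm")]:
    cache.load(name, tier)

# An agent chain switches models without ever paying the reload cost:
print(cache.activate("reasoning-70b"))
print(cache.activate("code-34b"))
print(cache.activate("tool-use-8b"))
```

<p>The point is the asymmetry: the load cost is paid once, while activation is cheap, which is what makes mid-chain model switching viable.</p><p>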
They call this &#8220;agentic caching&#8221; &#8212; inference contexts for multiple models persist in memory so you don&#8217;t pay the reload cost every time you switch.</p><p>They raised $350M+ alongside a <a href="https://newsroom.intel.com/data-center/intel-and-sambanova-planning-multi-year-collaboration-for-xeon-based-ai-inference">multi-year strategic collaboration with Intel</a> to deliver inference solutions. <a href="https://www.bloomberg.com/news/articles/2026-02-24/intel-backed-sambanova-raises-cash-touts-softbank-chip-contract">SoftBank is the first SN50 customer</a>, deploying it in next-gen Japanese data centers. Shipments start H2 2026.</p><h3>The Citrini Selloff</h3><p>A <a href="https://open.substack.com/pub/citrini/p/2028gic?utm_campaign=post-expanded-share&amp;utm_medium=post%20viewer">post by Citrini Research</a> caused <a href="https://www.morningstar.com/news/marketwatch/20260223451/did-a-blog-post-just-cause-software-stocks-to-lose-more-than-200-billion-in-market-cap">big drops in the market</a> because it described a futuristic scenario in 2028 where pervasive agentic AI has taken over most of the software industry, displacing white collar jobs, and massively disrupting the economy. The piece is purely fictional, but the market response was not.</p><p>Perhaps the most criticized part of the piece was the AI-led destruction of the moats held by companies such as DoorDash and Uber Eats. The piece envisioned a marketplace of vibe-coded alternatives to food delivery companies which will destroy the &#8220;habitual intermediation&#8221; moat held by the incumbents.</p><p>Ben Thompson <a href="https://stratechery.com/2026/another-viral-ai-doomer-article-the-fundamental-error-doordashs-ai-advantages/">pushes back hard</a> on this argument in a Stratechery Plus article stating that DoorDash has an incredibly powerful network effect that strongly entrenches three parties: the driver network, the restaurant network, and the customer network. 
The ability to provide value to the customer (more selection), restaurants (more volume), and drivers (more pay) makes it an incredibly hard business model to displace with merely vibe-coded alternatives.</p><p>A <a href="https://www.citadelsecurities.com/news-and-insights/2026-global-intelligence-crisis/">nice piece by Citadel Securities</a> offers a much more data-driven, level-headed rebuttal to the original article. They argue that St. Louis Fed data does not show that AI is actually displacing jobs, but could instead complement them &#8212; just like the introduction of Microsoft Excel did not destroy desk work jobs, but instead morphed them into a different function. </p><p>AI is also constrained by physics and finance: energy, raw materials, and manufacturing capacity could act as a natural limiter of exponentials, and if the marginal cost of compute rises above the marginal cost of human labor, AI substitution instantly fails in an economic sense.</p><p>Everything is still speculation, and the discussion on the future of AI is fantastic, but history&#8217;s lessons tell us that when productivity jumps occur, humans only work more, not less.</p><div><hr></div><p>Would love it if you could answer this simple question. &#128591;&#127997;</p><div class="poll-embed" data-attrs="{&quot;id&quot;:461628}" data-component-name="PollToDOM"></div><div><hr></div><div class="captioned-button-wrap" data-attrs="{&quot;url&quot;:&quot;https://www.viksnewsletter.com/p/twic-amdmeta-new-ai-chips-citrini?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="CaptionedButtonToDOM"><div class="preamble"><p class="cta-caption">Share it with 5 people.
Get 1 month paid sub.</p></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.viksnewsletter.com/p/twic-amdmeta-new-ai-chips-citrini?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.viksnewsletter.com/p/twic-amdmeta-new-ai-chips-citrini?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p></div><p></p>]]></content:encoded></item><item><title><![CDATA[The AI Datacenter CPU Yellow Pages]]></title><description><![CDATA[Grace, Vera, Venice, Turin, Diamond/Granite Rapids, Clearwater/Sierra Forest, Graviton, Cobalt, Phoenix, AmpereOne.]]></description><link>https://www.viksnewsletter.com/p/the-ai-datacenter-cpu-yellow-pages</link><guid isPermaLink="false">https://www.viksnewsletter.com/p/the-ai-datacenter-cpu-yellow-pages</guid><dc:creator><![CDATA[Vikram Sekar]]></dc:creator><pubDate>Tue, 24 Feb 2026 13:21:40 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/754d30f1-1b70-4131-b083-869177b41afb_1456x1048.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In last week&#8217;s article, we discussed why CPUs are critical in the era of agentic AI. The &#8220;operational burden&#8221; of orchestrating AI tasks falls on the CPU, which affects GPU utilization rates and eventually TCO. Check out the whole post.</p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;c982d5c4-9fcd-491e-b75e-9ee3f2cf2879&quot;,&quot;caption&quot;:&quot;For the better part of two years, CPUs have been an afterthought in AI infrastructure while GPUs got all the attention for training, and more recently inference.
In the last 6 months, the rise of agentic AI is proving to be the &#8220;killer app&#8221; that AI inference needed.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;The CPU Bottleneck in Agentic AI and Why Server CPUs Matter More Than Ever&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:124411709,&quot;name&quot;:&quot;Vikram Sekar&quot;,&quot;bio&quot;:&quot;EE PhD. 15+ years in semiconductor engineering. I've helped build the technology I write about.&quot;,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!RTM-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5e3e259-005c-4bcf-bffd-1a20cd78aa86_1080x1080.png&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:100}],&quot;post_date&quot;:&quot;2026-02-17T06:01:31.028Z&quot;,&quot;cover_image&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7a3b3c30-89da-4ddc-876b-182e455e96e3_1456x1048.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.viksnewsletter.com/p/the-cpu-bottleneck-in-agentic-ai&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:188043920,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:58,&quot;comment_count&quot;:0,&quot;publication_id&quot;:2065897,&quot;publication_name&quot;:&quot;Vik's Newsletter&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!9JlA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa409d69d-ca10-4bfe-a1fc-f8d291690566_185x185.png&quot;,&quot;belowTheFold&quot;:false,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><p>In the paywalled portion of the above post, we compared 16 different server CPUs and scored them across nine 
different metrics to assess their suitability for reasoning- or action-oriented workloads. In this post, we will show the work behind the CPU scores, discuss each rating, and provide a brief use-case recommendation. </p><p>In an ideal world, I would base the scoring on actual workloads &#8212; something along the lines of <a href="https://inferencex.semianalysis.com/">InferenceX</a>/<a href="https://www.clustermax.ai/">ClusterMAX</a> by SemiAnalysis. I have neither access to the hardware nor the skills to actually do this. So we will compare known/reported features.</p><p>Claude Opus 4.6 was used to help collate and organize the results. A lot of the information is parsed from <a href="https://open.substack.com/pub/semianalysis/p/cpus-are-back-the-datacenter-cpu?utm_campaign=post-expanded-share&amp;utm_medium=web">SemiAnalysis&#8217; extensive reporting</a> on the landscape of datacenter CPUs, with added information to compare CPUs across a wide range of features. AI has been immensely useful in this research and helped speed up analysis that would have taken far too long for one person to do manually. I have done my best to verify the accuracy of the metrics and ratings. Please contact me if you find errors.</p><p>Below is a list of CPUs that are covered in this article. You can access each of them out of order using the sidebar on Substack. I might continue to update/add to this server CPU database, which is why this will remain purely on Substack for paid subscribers.
No downloadable assets will be available.</p><ul><li><p>NVIDIA (Grace, Vera) &#8212; <em><strong>For free subscribers</strong></em></p></li><li><p>AMD (Venice Dense, Venice Classic, Turin Dense)</p></li><li><p>Intel (Diamond Rapids, Granite Rapids, Clearwater Forest, Sierra Forest)</p></li><li><p>ARM Hyperscaler (AWS Graviton5, Microsoft Cobalt 200)</p></li><li><p>Other ARM (ARM Phoenix, AmpereOne M)</p></li><li><p><em>Excluded: Google Axion (low core counts), Huawei Kunpeng 950 (too little known)</em></p></li></ul><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.viksnewsletter.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><em>Support extensive research with a paid subscription. You can expense it too!</em></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><div><hr></div><h2>Scoring Framework</h2><p>Here is a quick recap of the scoring framework. Each CPU is scored on 9 metrics using a 1-5 point scale, where higher is better. The points are later weighted depending on whether the feature matters more for a reasoning or an action-oriented workload.</p><ol><li><p><strong>Per-core performance:</strong> Higher clock speeds and/or higher IPC are good for both reasoning and action workloads. Fast CPUs finish tasks quickly and watch GPU token output closely to determine next steps.</p></li><li><p><strong>Core count:</strong> Higher is better for action workloads, not important for reasoning workloads. More cores = more agents.
Multi-threading helps a lot.</p></li><li><p><strong>CPU-xPU interconnect bandwidth:</strong> Higher is better for long-context reasoning workloads so that large batches of tokens can be exchanged quickly; not important for action workloads that only use the GPU for short durations.</p></li><li><p><strong>Memory BW and capacity:</strong> More is better for both reasoning and action workloads. Reasoning workloads will evict to lower memory tiers; action workloads need working memory for a large number of agents.</p></li><li><p><strong>Cache size:</strong> More is better for action workloads so that small data does not have to leave the chip. This means lower latency. Not important for reasoning workloads because of large memory traffic that won&#8217;t fit in cache anyway.</p></li><li><p><strong>Performance/watt:</strong> More is better for action workloads, especially if CPU counts in the datacenter grow; unimportant for reasoning workloads because power is dominated by GPU use.</p></li><li><p><strong>PCIe speeds:</strong> More is better for action workloads because lots of external accesses are involved; unimportant for reasoning workloads where data exchange is mostly between the CPU and xPU.</p></li><li><p><strong>ISA:</strong> x86 is better for action workloads only because of the maturity of the software ecosystem; either is okay for reasoning workloads where only a narrow set of tools is called. In the scoring system, x86 = 5 points and Arm = 3 points.</p></li><li><p><strong>NUMA latency:</strong> Lower is better for action workloads because all agents will have similar memory access latency; unimportant for reasoning workloads that only use a few cores.
Fewer chiplets in a CPU design usually mean lower NUMA latency.</p></li></ol><p><strong>Composite Scores:</strong></p><ul><li><p>Reasoning composite is based on: per-core performance, xPU interconnect bandwidth, and memory BW/capacity</p></li><li><p>Action composite is based on: core/thread count, cache, perf/watt, PCIe, ISA (x86 advantage), and NUMA</p></li></ul><p>The composite scores are calculated from the average scores of the metrics that matter to each workload. The score is then adjusted up or down based on market dynamics, historical trends, and company specifics. This is admittedly not very scientific, but then neither is the real world when it comes to choosing a CPU.</p><div><hr></div><h3>NVIDIA</h3><h4>Grace</h4><blockquote><p><strong>Reasoning: 3 | Action: 1 | Best Fit: Reasoning</strong></p></blockquote><ul><li><p><strong>Cores/Threads:</strong> 72 cores / 72 threads</p></li><li><p><strong>Process:</strong> TSMC 4N | <strong>TDP:</strong> 200-250W (single die)</p></li><li><p><strong>Socket:</strong> BGA (platform-locked)</p></li><li><p><strong>Memory:</strong> LPDDR5X, 480 GB capacity, 512 GB/s bandwidth</p></li><li><p><strong>I/O:</strong> PCIe5 | NVLink-C2C 900 GB/s bidirectional to attached GPUs</p></li></ul><p><strong>Status:</strong> Available since mid-2023 | Shipping in production systems<br><strong>Access:</strong> Platform-locked &#8212; only available in NVIDIA superchip configurations<br><strong>Roadmap:</strong> <strong>Grace</strong> &#8594; Vera (H2 2026)</p><p><strong>Per-core perf (3/5):</strong> Neoverse V2 cores with 1MB L2. According to SemiAnalysis, Grace has a significant branch-prediction bottleneck that is currently slowing AI workloads on the Grace CPUs in GB200 and GB300 systems. Hopper used x86 CPUs, and Vera uses custom Arm cores, suggesting that the &#8220;off-the-shelf&#8221; Neoverse cores used in Grace just didn&#8217;t work out.</p><p><strong>Core count (1/5):</strong> 72 cores / 72 threads. No SMT.
Lowest core count among the non-specialty CPUs. Insufficient for action workloads. A <a href="https://blog.vllm.ai/2026/02/01/gpt-oss-optimizations.html">blog post by the NVIDIA and vLLM team</a> suggests that the CPUs could not keep up with the GPUs and needed special optimizations to keep GPU utilization high.</p><p><strong>xPU interconnect (4/5):</strong> NVLink-C2C at 900 GB/s bidirectional. Second-best GPU interconnect, behind Vera&#8217;s 1.8 TB/s. Coherent memory access allows the GPU to access CPU memory at full bandwidth.</p><p><strong>Mem BW &amp; capacity (2/5):</strong> 480GB LPDDR5X at 512 GB/s. Decent bandwidth but limited capacity &#8212; approximately one-third of Vera&#8217;s 1.5 TB. LPDDR5X keeps non-GPU power down.</p><p><strong>Cache size (2/5):</strong> 114MB L3 + 72MB L2 (1MB per core). Modest total of ~186MB.</p><p><strong>Perf/watt (3/5):</strong> LPDDR5X helps with power efficiency, and the simpler Arm ISA improves efficiency too, but raw performance is not top of the line.</p><p><strong>PCIe (2/5):</strong> PCIe5 with limited lane count &#8212; NVIDIA focused the IO budget on NVLink-C2C, not PCIe connectivity.</p><p><strong>ISA (3/5):</strong> ARM. Same ecosystem lock-in as Vera &#8212; typically only available as part of NVIDIA&#8217;s superchip platforms (Grace Hopper, Grace Blackwell). Reports state that NVIDIA is deploying both Grace and Vera CPUs in standalone formats as part of its recent partnership with Meta.</p><p><strong>NUMA (5/5):</strong> Single die with 6x7 mesh, 72 cores. Very clean, uniform memory access.</p><p><strong>Recommendation:</strong> Grace is a reasoning CPU that&#8217;s being superseded by Vera. Its NVLink-C2C gives it a clear reasoning advantage over non-NVIDIA CPUs, but the reported branch-prediction bottleneck, if true, actively slows AI workloads. It is unsuitable for action workloads with only 72 threads and no SMT.
Vera is a strict upgrade in every dimension.</p><div><hr></div><h4>Vera</h4><blockquote><p><strong>Reasoning: 5 | Action: 2 | Best Fit: Reasoning</strong></p></blockquote><ul><li><p><strong>Cores/Threads:</strong> 88 cores / 176 threads (Spatial Multithreading)</p></li><li><p><strong>Process:</strong> TSMC N3 | <strong>TDP:</strong> Not disclosed</p></li><li><p><strong>Socket:</strong> CoWoS-R package (BGA, platform-locked)</p></li><li><p><strong>Memory:</strong> 8x SOCAMM LPDDR5X, 1.5 TB capacity, 1.2 TB/s bandwidth</p></li><li><p><strong>I/O:</strong> PCIe6 + CXL3 on separate IO chiplet | NVLink-C2C 1.8 TB/s bidirectional to Rubin GPUs</p></li></ul><p><strong>Status:</strong> H2 2026 (NVL72 platform)<br><strong>Access:</strong> Platform-locked to Rubin NVL72 (72 GPUs + 36 Vera CPUs + 18 compute blades per rack) | Standalone Vera available through select partners<br><strong>Roadmap:</strong> Grace &#8594; <strong>Vera</strong> &#8594; TBD</p><p><strong>Per-core perf (5/5):</strong> Custom Olympus ARM core &#8212; NVIDIA&#8217;s own design, not the off-the-shelf Neoverse V2 used in Grace. Supports SMT for 176 threads. 2MB L2 per core. NVIDIA claims a 2x performance improvement over Grace. ARMv9.2 with SVE2 FP8 operations.</p><p><strong>Core count (2/5):</strong> 88 cores / 176 threads. Deliberately modest &#8212; this CPU is designed to run a small number of tasks extremely fast, not hundreds of agents. 176 threads is a modest thread count for large-scale agent orchestration.</p><p><strong>xPU interconnect (5/5):</strong> NVLink-C2C at 1.8 TB/s bidirectional &#8212; the highest CPU-to-GPU bandwidth of any CPU here. Coherent memory sharing lets Rubin GPUs read directly from CPU memory. This is the defining feature that makes Vera a reasoning CPU.</p><p><strong>Mem BW &amp; capacity (5/5):</strong> 1.5 TB of LPDDR5X across 8 SOCAMM modules at 1.2 TB/s bandwidth. Massive capacity for KV-cache expansion.
LPDDR instead of DDR5 DIMMs keeps power low while maintaining bandwidth.</p><p><strong>Cache size (3/5):</strong> 162MB L3 spread across a 7x13 mesh, plus 2MB L2 per core (176MB total L2). Adequate for managing inference state and metadata but not exceptional for the core count.</p><p><strong>Perf/watt (3/5):</strong> LPDDR5X helps with power but this isn&#8217;t an efficiency-first design. The GPU dominates the power budget anyway.</p><p><strong>PCIe (4/5):</strong> PCIe6 with CXL3 on a separate IO chiplet. Available for networking and storage, but the primary GPU data path is NVLink-C2C, not PCIe. PCIe becomes the path to NVMe for KV-cache tier-3 offload.</p><p><strong>ISA (3/5):</strong> ARM. Locks you into the NVIDIA ecosystem &#8212; Vera primarily ships as part of the Rubin platform (1 Vera CPU to 2 Rubin GPUs per superchip), although standalone Vera is reportedly available through select partners. The ARM ecosystem is adequate for reasoning workloads but less versatile for diverse tool execution.</p><p><strong>NUMA (5/5):</strong> All 88 cores sit on a single reticle-sized compute die (3nm) with the full 7x13 mesh. Memory and IO are on separate chiplets (6 dies total on CoWoS-R), but all cores share one die. A single, clean NUMA domain for compute &#8212; no cross-die core-to-core penalties. Cleanest NUMA topology at this core count.</p><p><strong>Recommendation:</strong> Vera is the gold standard for reasoning workloads. It scores 5 on the three metrics that matter most for reasoning (per-core perf, xPU interconnect, memory BW/capacity). Its entire design philosophy is: a massive pipe to the GPU, fast cores, lots of memory, and nothing else in the way. It scores poorly on action metrics because the core count is too low.</p><div><hr></div><h3>AMD</h3>
      <p>
          <a href="https://www.viksnewsletter.com/p/the-ai-datacenter-cpu-yellow-pages">
              Read more
          </a>
      </p>
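<p>A footnote on the Scoring Framework: the composite calculation it describes (average the 1-5 scores of the metrics relevant to each workload, then adjust judgmentally) can be sketched in a few lines. The Grace scores below are the ones given above; the equal-weight averaging is my reading of the stated methodology, and the published composites also fold in the non-formulaic market adjustment.</p>

```python
# Sketch of the composite-score arithmetic from the Scoring Framework.
# Assumption: the composite is an unweighted average of the metrics
# relevant to a workload; the article's final scores are then adjusted
# judgmentally, so they can differ from these raw averages.

REASONING_METRICS = ["per_core_perf", "xpu_interconnect", "mem_bw_capacity"]
ACTION_METRICS = ["core_count", "cache", "perf_per_watt", "pcie", "isa", "numa"]

def composite(scores: dict, metrics: list) -> float:
    """Average the 1-5 scores of the metrics relevant to one workload."""
    return sum(scores[m] for m in metrics) / len(metrics)

# NVIDIA Grace, scored as in the article
grace = {
    "per_core_perf": 3, "core_count": 1, "xpu_interconnect": 4,
    "mem_bw_capacity": 2, "cache": 2, "perf_per_watt": 3,
    "pcie": 2, "isa": 3, "numa": 5,
}

print(round(composite(grace, REASONING_METRICS), 1))  # 3.0
print(round(composite(grace, ACTION_METRICS), 1))     # 2.7
```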
   ]]></content:encoded></item></channel></rss>