Natural Gas-Fired Memos
A friend of ours recently used Claude, the LLM built by Anthropic PBC, to draft a letter to a public company CEO with suggestions for improving a struggling subsidiary after a disappointing Q4 2025 earnings report. It was not quite an activist letter, more like a thoughtful memo from a concerned shareholder.
When he sent us the draft, we winced a little. It was obviously AI-written. Chatty and breezy. It used the CEO's first name repeatedly in a way that felt like a car salesman working a showroom floor. We assumed that he would let us help edit it before sending. Instead, he fired it off on a Friday afternoon, as-is.
We thought he had blown his credibility by sending it, so we were shocked that the CEO responded the same day. They had a candid, two-hour phone call to share ideas the following Monday morning. Six months ago, "AI-written" was unambiguously a pejorative. Apparently that is changing faster than we thought.
As Luddites who don't like change (we own coal royalties!), we have been AI skeptics for most of the past several years. We chuckled at Phil Greenspun's recent post, Why Johnny LLM can’t read web page source code. We read Stephen Wolfram's book a year ago. He was impressed with LLMs but doubted they would go all the way to AGI. These seemed like reasonable positions.
What started to shift our thinking was The Scaling Era, which pointed us to a 2019 essay called The Bitter Lesson by Richard Sutton. His argument was that general methods for artificial intelligence that leverage computation always win when competing with methods that attempt to take advantage of domain-specific human knowledge. Historically, AI researchers have always tried to build knowledge into their systems by encoding rules, designing expert systems, and programming in domain expertise.
The historical pattern is that domain-specific structuring helps in the very short run, but it always plateaus and is then outcompeted by brute-force computational methods. In chess and Go, search and learning are the techniques that have worked, using as much computational power as possible. Sutton says these are the only two methods that seem to scale to arbitrarily large degrees.
This reminded us of a book by psychologist Paul Meehl (1920-2003), who argued essentially the same thing sixty-five years earlier in a completely different field. His book Clinical vs. Statistical Prediction (published 1954) showed that actuarial rules (statistics applied to data) consistently outperformed the clinical judgment of professionals. Whether they were doctors, parole boards, or admissions officers, human experts with years of training were consistently beaten by simple models. (Robyn Dawes later made the same point in a paper called The Robust Beauty of Improper Linear Models in Decision Making.)
Meehl and Sutton had the same insight: systematic data processing beats human intuition, and the gap widens as the amount of data, and especially the computation available to mine it, grows. LLMs are like Meehl's simple linear models for decision-making, except scaled to the trillions of tokens now available for training.
The clincher for us was the conclusion of The Scaling Era. The author believes in what we have been calling the Gods of Straight Lines: "the GPUs keep improving, the training runs keep scaling, and on a log-log graph everything is eerily, perfectly linear. Next year is simply this year, plus a known rate of change multiplied by delta-t."
We've learned not to fight straight lines on log charts. We now take it for granted that solar panels and batteries will continue to get cheaper every year. Not only have the learning curves been relentless for decades but they have even gotten steeper. The curves governing LLMs look the same. Whether or not we or the AI researchers fully understand why their models work, the scaling curves don't care about our epistemology.
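In case it helps to see how little machinery the Gods of Straight Lines require, here is a toy sketch in Python. The numbers are made up for illustration, not real data; the point is just that a straight line on a log chart means constant multiplicative growth, so "next year" is a one-line extrapolation.

```python
# Toy sketch of the "straight line on a log chart" logic.
# The values below are hypothetical (roughly 3x growth per year),
# standing in for training compute, solar $/W declines, etc.
import numpy as np

years = np.array([2019, 2020, 2021, 2022, 2023])
values = np.array([1.0, 3.1, 9.8, 30.5, 96.0])

# Linear in log space means log(y) = log(y0) + r*t, i.e. constant
# multiplicative growth. Fit the rate r by least squares:
r, log_y0 = np.polyfit(years - years[0], np.log(values), 1)

# "Next year is simply this year, plus a known rate of change
# multiplied by delta-t" -- applied in log space:
def extrapolate(this_year_value, delta_t):
    return this_year_value * np.exp(r * delta_t)

print(f"fitted growth rate: {np.exp(r):.2f}x per year")
print(f"two years out: {extrapolate(values[-1], 2):.0f}")
```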
That, in turn, virtually assures that the Mag 7 companies (Zuck) and the AI "labs" (Elon doesn't like that term) will spend their planned trillions on capital expenditure for GPUs and data centers. Leopold Aschenbrenner is a former OpenAI researcher who published a widely-read series on AI trajectories last year. He has sketched out scenarios involving trillion-dollar compute clusters drawing 100 gigawatts of power. That number seems outlandish until you look at the capex announcements from the past six months and realize the industry is already on that trajectory.
We have started calling our friend's memos his "natural gas-fired research." (Natural gas is the largest source of electrical generation in the U.S., meaning that is what most likely powers any given GPU writing memos.) Which is why we keep coming back to the natural gas pipelines owned by Kinder Morgan (KMI) and Energy Transfer (ET). There is an important distinction between producing natural gas and transporting it. Producing it is a lousy business, but transporting it via pipeline is something else entirely. The pipes are already in the ground. Permitting and right-of-way acquisition to build new ones is enormously difficult. Customers sign take-or-pay contracts that guarantee payment regardless of whether they actually use the capacity. It is a toll road, not a commodity business.
Kinder Morgan operates about 58,600 miles of natural gas transmission pipeline, more than any other company in the country, moving roughly 40% of U.S. natural gas production. Around 96% of its cash flows are take-or-pay, fee-based, or hedged, making the commodity price of natural gas nearly irrelevant to its earnings. Energy Transfer's latest presentation is explicit about what is driving new growth. Its 2026 capital plan lists "natural gas pipeline projects serving data center facilities" as a priority in both its intrastate and interstate segments. It has a long-term agreement with Oracle to supply roughly 900,000 MMBtu per day to three U.S. data centers, and began flowing gas on its first lateral to a data center campus near Abilene, Texas earlier this year.
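To put that Oracle volume in context, here is a back-of-envelope conversion. The MMBtu-to-kWh factor is standard; the combined-cycle efficiency is our assumption, not anything from Energy Transfer's filings.

```python
# Rough scale of 900,000 MMBtu/day of natural gas, converted to
# continuous electric output. Efficiency is an assumed value.
MMBTU_TO_KWH = 293.07           # thermal energy content per MMBtu
gas_per_day = 900_000           # MMBtu/day, per the Oracle agreement

thermal_gw = gas_per_day * MMBTU_TO_KWH / 24 / 1e6  # kWh/day -> GW
electric_gw = thermal_gw * 0.55  # assumed combined-cycle efficiency

print(f"~{thermal_gw:.0f} GW thermal -> ~{electric_gw:.0f} GW electric")
# ~11 GW thermal -> ~6 GW electric: on the order of six large
# nuclear reactors of continuous output, for three campuses.
```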
The pipeline companies collect a toll on natural gas movement, and their inherent operating leverage makes incremental throughput for data centers highly profitable. Beyond data centers, coal plant retirements, Sun Belt population growth, and manufacturing reshoring are all adding to the natural gas story. Kinder Morgan points out in its investor presentation that U.S. natural gas supply has grown more than 70% since 2010 with prices remaining largely range-bound, suggesting no supply constraint on the horizon.
As AI infrastructure spending scales, the competition for land near power infrastructure will intensify. A wind or solar farm on a large tract with good resource potential can supply a data center directly via long-term power purchase agreement, or feed into a grid that increasingly needs it.
We've also seen some overwrought pessimism surrounding AI-induced disruption. The Citrini piece (The 2028 Global Intelligence Crisis, published February 22nd) triggered a software selloff the following day. We think that was overdone. As Fred Liu pointed out, software companies don't really sell code. They sell convenience, trust, reliability, and institutional knowledge. The devil is always in the details like maintenance, security patches, compliance, and integrations. Most businesses would rather pay someone else to handle all of that than own it themselves.
Our DocuSign subscription just auto-renewed. We're still using QuickBooks and TurboTax. Last week we chose to fade Citrini's pessimism with a basket that included the payment networks (Visa, Mastercard, Amex), the bank core processors (FIS, Fiserv, Jack Henry), Booking Holdings, Intuit, and Schwab.
Visa's experience has been that potential competitors just become customers of its (four-sided!) network, which is the biggest and the best. The bank core processors (i.e., the software back-end of almost all banks) have contracts that define "sticky": five- to ten-year commitments by the banks, plus existential risk to a bank's business if an attempted switch goes poorly.
Futurist James Pethokoukis (previously) articulated two possibilities for AI: either it is a powerful general-purpose technology (like the PC or the internet), or it advances all the way to AGI, capable of performing essentially every economically valuable task humans perform.
Vaclav Smil's book Growth points out that exponential growth, natural or anthropogenic, is always only a temporary phenomenon, to be terminated due to a variety of physical, environmental, economic, technical, or social constraints. Eventually every exponential growth curve becomes logistic.
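A toy illustration of Smil's point, with arbitrary parameters: an exponential and a logistic curve are nearly indistinguishable until the logistic starts to feel its ceiling.

```python
# Exponential vs. logistic growth with arbitrary parameters: both
# start at 1; the logistic saturates at a carrying capacity K.
import numpy as np

K, r = 100.0, 0.5
t = np.arange(20)

exponential = np.exp(r * t)
logistic = K / (1 + (K - 1) * np.exp(-r * t))

for ti in (2, 5, 10, 15):
    print(f"t={ti:2d}  exp={exponential[ti]:8.1f}  logistic={logistic[ti]:5.1f}")
# Early on the curves track each other closely; the divergence only
# shows up as the logistic approaches its ceiling.
```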
Whether that happens at "very powerful tool" or at "AGI" is the question. If it's AGI, our stock portfolios will not matter. What will matter is who controls it and what they decide to do with the rest of us.
But if AI lands at "powerful general-purpose technology," and that is the scenario we currently find most likely, then the implications are good. A sustained productivity boost would push the economy onto a higher growth path. That makes the debt and entitlement problems of the developed world more manageable. And to the extent that AI advantages incumbents with established distribution and customer trust, the companies in our basket look like reasonable places to be.
And pipelines will profitably go "ssssssss," carrying natural gas to power the computation, at least until batteries and solar get cheap enough to take over. (But we can own the land where that will happen.)
Our friend got instant follow-up from the CEO of a billion-dollar company because a machine wrote a persuasive memo on a Friday afternoon. That's not AGI, but it might be enough.