Thursday, July 10, 2025

Thursday Night Links

  • Yes, the infant mortality rate has dramatically declined. But the reality is that mortality rates have dramatically declined—at all ages. While it wouldn’t have been unusual to see a 70-year-old in the past, the majority did not survive to that age, even among those who escaped the deadly first few years. Today, it is more unusual not to survive to age 70. [Patterns in Humanity]
  • The original rationale behind broad tariffs was that they would strengthen and protect domestic manufacturing, but when that thesis was contradicted by events, the messaging pivoted to “tariffs can replace the income tax.” When it became clear that wasn’t true, the message became “tariffs give us leverage to force neighbors to secure their borders / lower their tariffs / pay more for defense.” From the outside, it’s clear that these different narratives are mutually contradictory (e.g., a crucial source of revenue or a pillar of industrial policy can’t be bargained away) and that tariff supporters are engaging in thesis drift to rationalize their original policy. It would be tempting to write this off as a quirk of Trumpism, but this is actually an issue with all flawed populist policies that manage to gain traction. Populist policies gain support because, to borrow the words of Dan Williams, they feel like they should work. People support tariffs because it feels like tariffs should make us richer, no matter what the experts say. The problem with populist policies is that they only feel like they should work. In reality, they are based on flawed models of the world, and as a result they produce undesirable outcomes that were predicted by experts but come as a surprise to their supporters. [MD&A]
  • Everybody who’s run an institution, large or small, knows that as the institution grows, the quality of the information the leader gets is worse and worse. Every bit of data gets padded and massaged and reformulated as it makes its way up through layers of yes-men and of honest men with agendas. The only solution, as countless CEOs and presidents and generals and monarchs have discovered, is to “go deep,” reaching down through all the protesting layers of middle managers to find out the ground truth for yourself. The middle layers hate it when you do this, and they hated Cheney for it, but this is a good way to get results — and alas (since he would become the biggest booster of the war), it worked for Cheney. [Mr. and Mrs. Psmith’s Bookshelf]
  • Generative AI systems, such as ChatGPT, appear to be proficient at generating quality text and may be used to create textbooks and other learning materials. Whether providers of generative AI systems may utilize copyrighted content or the proprietary data of others to train such AI systems without permission from the copyright owner is currently unsettled under the law, both within and outside the United States. Numerous lawsuits in progress in the United States address this question under U.S. copyright law, focusing on whether copying or reproducing proprietary content without the consent of the copyright owner to train generative AI models constitutes fair use. There is also increased pressure from leaders of technology and AI organizations and companies to weaken copyright protections in general. [McGraw Hill, Inc.]
  • Broad-spectrum dampening of the reward system is a terrible fate. Some antipsychotic drugs like haloperidol do this. Take too much haloperidol, and you’ll sit motionless until you die, because no action feels worth it. But the existence of silver-bullet anti-addiction medications (Ozempic isn’t the only one; naltrexone seems to treat a whole host of different drug and behavioral addictions) suggests there’s also a sort of narrow-spectrum dampening, one which affects addictions and nothing else. Why? Isn’t addiction just the extreme version of normal wanting? Apparently not. None of these anti-addictive drugs affect wholesome rewards like the feeling of a job well done or a child’s smile. Just drug addictions, and a few compulsive behaviors like porn and gambling. Maybe the job well done and the child’s smile get implemented partly through some system other than dopamine (oxytocin?), or maybe these medications lop off some extreme part of the reward distribution that only addictive drugs ever reach in real life. [Scott Alexander]
  • For what I think are mostly sociological reasons, people who have built neural networks such as LLMs have mostly tried to do without explicit models, hoping that intelligence would “emerge” from massive statistical analyses of big data. This is by design. As a crude first approximation, what LLMs do is extract correlations between bits of language (and in some cases images), but they do this without the laborious and difficult work (once known as knowledge engineering) of creating explicit models of who did what to whom when, and so forth. It may sound weird, but you cannot point to explicit data structures, such as databases, inside LLMs. You can’t say, “this is where everything that the machine knows about Mr. Thompson is stored”, or “this is the procedure that we use to update our knowledge of Mr. Thompson when we learn more about Mr. Thompson”. LLMs are giant, opaque black boxes with no explicit models of the world at all. Part of what it means to say that an LLM is a black box is to say that you can’t point to an articulated model of any particular set of facts inside. (Many people realize that LLMs are “black boxes”, but don’t quite understand this important implication.) [Marcus on AI] A short code sketch below makes the contrast concrete.
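
To make the point about explicit models concrete, here is a minimal Python sketch. Everything in it is invented for illustration (the entity, the attributes, the function name); it is not how any real system is implemented. It contrasts the kind of symbolic knowledge base that knowledge engineering produced, where each fact has an address and an auditable update procedure, with the undifferentiated weight matrices that are, to a first approximation, all an LLM actually contains.

```python
import random

# An "explicit model" in the knowledge-engineering sense (hypothetical data):
# every fact about Mr. Thompson lives at a known address.
knowledge_base = {
    "Mr. Thompson": {"employer": "Acme Corp", "city": "Boston"},
}

def update_fact(entity, attribute, value):
    # "This is the procedure we use to update our knowledge of Mr. Thompson":
    # a single, inspectable place where revision happens.
    knowledge_base.setdefault(entity, {})[attribute] = value

update_fact("Mr. Thompson", "city", "Chicago")
print(knowledge_base["Mr. Thompson"])  # {'employer': 'Acme Corp', 'city': 'Chicago'}

# An LLM, by contrast, is (crudely) stacks of weight matrices. This toy 8x8
# layer stands in for billions of parameters: no cell of it is "where the
# model stores Mr. Thompson's city", and there is no targeted update rule,
# only gradient descent over the whole tensor.
llm_layer = [[random.gauss(0.0, 0.02) for _ in range(8)] for _ in range(8)]
```

In the first object the fact is addressable and the update is a line of code you can point to; in the second, whatever the model “knows” is distributed across every entry at once, which is what it means to say there is no articulated model inside.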
