The Tax Cuts and Jobs Act prohibits advance refundings going forward because they increase the supply of tax-exempt bonds (at least until the old bonds are called), thereby shaving the revenue to the Treasury.

According to the Joint Committee on Taxation, the end of advance refundings will save the federal government $17 billion over the next ten years:

Of course, that’s not much in comparison to the overall projected effect of tax reform:

Over the next few years, perhaps $350 billion in escrow Treasuries will mature. These will have to be refinanced without the help of new advance refundings. The run-off from escrows is not as large as the Fed’s expected balance sheet reduction (as much an obsession to the markets as a sizzling steak to a dog), but the escrow wind-down could match about a third of the Fed’s roughly $1 trillion planned disposal of Treasuries. The loss of escrow funding to the Treasury, and the additional supply to the market, could prove significant.

**Addendum:**

The key to the $350 billion estimate for escrow securities was the S&P Municipal Bond Prerefunded/ETM Index. The index includes about $190 billion in bonds that have been escrowed to a call date (“ETC”) or escrowed to maturity (“ETM”). I scaled up the prerefunded bonds by taking the ratio of the full reported size of the municipal market ($3.8 trillion) to the whole S&P municipal bond universe ($2.2 trillion).
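The scale-up is a one-line calculation. Here is a minimal sketch using the figures quoted above; the raw ratio lands a bit under $350 billion, and presumably rounding and the escrow coverage discussed below account for the difference.

```python
# Scaling the prerefunded/ETM bonds in the S&P index up to the whole
# municipal market (dollar figures quoted in the addendum above).
prerefunded_in_index = 190e9   # S&P Prerefunded/ETM index, approximate par
sp_universe = 2.2e12           # whole S&P municipal bond universe
full_market = 3.8e12           # full reported size of the municipal market

scaled = prerefunded_in_index * full_market / sp_universe
print(f"roughly ${scaled / 1e9:.0f} billion")  # roughly $328 billion
```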

I was interested to see that the average coupon of the prerefunded bonds in the index is 4.9%. This seems to confirm the prevalence of 5% coupons in recent years. These high coupons (their yield is now 1.6%) were among the most attractive candidates for refunding.

The escrow Treasuries generally have much lower coupons than the refunded bonds they support, so more than $1 in Treasuries is needed to cover the interest payments on each $1 of refunded bonds. This relationship implies that $350 billion could be a conservative estimate for Treasuries in advance refunding escrows. On the other hand, some of the escrows are so short that they are considered to be “current refundings” instead of advance refundings. Current refundings are still allowed, but their short shelf life has probably limited their impact on the index. To the extent that some bonds in the index correspond to current refundings, the $350 billion estimate could be on the high side. Another distortion would be if the prerefunded share of the S&P index universe does not match that of the whole market.
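The more-than-dollar-for-dollar point can be illustrated with a toy escrow. All numbers below are hypothetical, not taken from the post: funding a 5% coupon bond to a call three years out with an escrow earning about 1.5% requires securities worth noticeably more than the refunded par.

```python
# Toy escrow sizing (hypothetical numbers): the escrow must equal the present
# value, at the escrow yield, of the refunded bond's coupons to the call date
# plus the redemption amount at the call.
coupon = 0.05         # refunded bond coupon
escrow_yield = 0.015  # yield earned by the escrow Treasuries
years_to_call = 3
call_price = 100.0    # redemption at the call date, per 100 par

r = escrow_yield / 2              # semiannual rate
n = years_to_call * 2             # number of semiannual periods
c = coupon / 2 * 100              # semiannual coupon per 100 par
escrow = sum(c / (1 + r) ** k for k in range(1, n + 1)) + call_price / (1 + r) ** n
print(round(escrow, 2))  # about 110 per 100 of refunded par
```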

*Addendum added January 21, 2018*

The editorial warned of a growing burden of interest payments on the national debt, due to rising interest rates and the anticipated drawdown of the Fed’s portfolio of Treasury securities. In my letter, I point out that the problem is exacerbated by two factors. First, the short maturity structure of the Treasury’s debt offers little protection against higher interest rates.

The second factor is the uneven composition of the securities held by the Fed. The Fed neutralizes the interest payments it receives by remitting them to the Treasury. Because the Fed tends to hold the Treasury’s higher-rate securities, the Treasury will sorely miss the Fed’s subsidy when it’s gone.


As the crosses get smaller, each new one must shrink in width by a factor of $1/\sqrt{3}$.

I asked David to explain his construction.

The segment marked “1” is a fixed fraction of the “radius” from a corner of the outer cross to the center, so the construction generates an inner cross with the required radius, $1/\sqrt{3}$ times as long as the original radius.

A nice feature of this construction is that the compass is only used near the beginning. After that, the construction always provides a scaffold for the next inner cross.


A solution that occurred to me would be to use an infinity of nested crosses. If the area of the cross shrinks at each step by one-third, then the combined area of the colored regions converges to one-quarter of the large cross. This works because the alternating geometric series

$$\frac{1}{3} - \frac{1}{9} + \frac{1}{27} - \frac{1}{81} + \cdots$$

converges to $\frac{1}{4}$. Note that if the area shrinks by one-third, the width of the cross must scale down by the square root of 3. So every two steps the width shrinks to one-third, placing a little cross precisely inside the middle square of a larger cross.
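The sum of the alternating series can be checked numerically in a few lines (a quick sketch, comparing partial sums against the closed form $a/(1-r)$ with $a = 1/3$ and $r = -1/3$):

```python
# Alternating geometric series: 1/3 - 1/9 + 1/27 - ... with ratio -1/3.
total = sum((-1) ** k / 3 ** (k + 1) for k in range(40))
closed_form = (1 / 3) / (1 + 1 / 3)  # a / (1 - r) with a = 1/3, r = -1/3
print(round(total, 12), closed_form)  # both equal 1/4
```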

I address whether the U.S. Treasury debt should be lengthened and whether the Treasury should sell 50-year or 100-year bonds. I think that “ultra-long” bonds are a good idea but that there won’t be enough demand for them to significantly reduce the Treasury’s interest rate risk. That reduction should instead come from reshaping the whole distribution of maturities, which is currently very front-loaded. I also argue that the Treasury should update its measures of interest rate risk, and keep a close eye on how the Fed manages its Treasury portfolio.

Please check out the post and let me know what you think!

*I am grateful to my wife Corrina for her essential suggestions and thoughtful editing through countless revisions.*

Click here to see our slides. They include a conceptual bond glossary that I prepared.

I look forward to a great conference (with maybe a little jazz on the side).

*Updated 10/3/2016 with link to presentation slides.*

The Bond Buyer published my commentary Taming Premium Bonds earlier today. Although callable premium bonds are very popular in the municipal market, I argue that they hurt issuers and the market. Market rules dictate that the proceeds to the issuer compensate only for high coupons to the call date. The compensation for high coupons past the call date is buried inside the call option, making a refunding almost inevitable. These bonds also make the market more opaque. As a solution, I propose that the call premiums be set to breakeven levels so that the price-to-call matches the price-to-maturity. Please see the article for the details.
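The breakeven idea can be sketched numerically. The figures below are illustrative only, not from the article: with a 5% coupon, a 3% yield, and a 10-year call on a 20-year bond, one can solve for the call price that makes the price-to-call equal the price-to-maturity.

```python
def bond_price(coupon, ytm, years, redemption=100.0):
    """Price per 100 par, semiannual coupons and compounding."""
    n = int(years * 2)
    c = coupon / 2 * 100
    r = ytm / 2
    return sum(c / (1 + r) ** k for k in range(1, n + 1)) + redemption / (1 + r) ** n

coupon, ytm = 0.05, 0.03                 # illustrative, not from the article
p_maturity = bond_price(coupon, ytm, years=20)

# Solve for the call redemption that reproduces the price-to-maturity
# when the bond is priced to the call date instead.
years_to_call = 10
n, r, c = years_to_call * 2, ytm / 2, coupon / 2 * 100
pv_coupons = sum(c / (1 + r) ** k for k in range(1, n + 1))
breakeven_call = (p_maturity - pv_coupons) * (1 + r) ** n

p_call = bond_price(coupon, ytm, years_to_call, redemption=breakeven_call)
print(round(p_maturity, 2), round(breakeven_call, 2), round(p_call, 2))
```

With this call premium the two prices match by construction, so the proceeds no longer hide value inside the call option.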

Update: This is not a new idea. I applied it as a financial advisor many years ago, and it is also the basis for one of the puzzles on this blog.

The purpose of the conference is to bring municipal finance professionals together with academics to encourage useful research and better practice. This year’s program will include several papers on financial distress and other challenges for municipalities. The keynote speaker will be Governor Alejandro Garcia Padilla of Puerto Rico.

As a discussant, I will respond to a paper that attempts to account for the potential benefits of future refundings within the interest cost of a bond or bond issue.

I understand that there may still be openings for the conference. Click here for more information, and to register. I hope to see you there.

Marie runs Ysais SEO, which helps businesses optimize their search engine results. She asked me to help with an Excel workbook to create Google sitemap links. I took the file she sent and worked it into a tool that can create several links for each specified website.

I thought I would share the workbook because it makes a good example of how to use Excel’s table feature and its HYPERLINK function. With Marie’s consent, here is the workbook. Click on the image to open the file.

There are four sheets, each with its own table (note that these are not “data tables”). The first three tables provide the building blocks for the links: the Google sitemap text (the “prefix”), the domain names, and part specifiers (the “suffix”). On “SiteMaps”, the last tab, a table generates hyperlinks for all possible combinations of the three text blocks.

To use the workbook, make sure that the prefix text meets your needs. As with all inputs in this workbook, the prefix text is colored **blue**. You can insert rows above if you need more than one prefix. Next, enter the domain names. There is room for 500 domain names; blanks are fine, and you can always insert more rows. The final input step is to review the suffix text, where you can also insert more items as needed.

Now go to the sitemap generator on the last sheet. Click on the “Good?” filter and select “TRUE” only. This will hide the blank links. Be sure that the bottom of the “Last?” column shows a “TRUE” value. This ensures that the last possible combination of your text blocks has been reflected in the sitemap list. If not, you will need to insert more rows into this table.
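For readers who prefer script to spreadsheet, the combination logic of the SiteMaps table amounts to a triple loop. The prefix, domains, and suffixes below are placeholders, not Marie’s actual inputs:

```python
# Every prefix x domain x suffix combination, as the SiteMaps table produces.
prefixes = ["http://example-sitemap-prefix/"]   # placeholder prefix text
domains = ["example.com", "example.org"]        # placeholder domain names
suffixes = ["sitemap.xml", "sitemap_index.xml"] # placeholder suffix text

links = [p + d + "/" + s for p in prefixes for d in domains for s in suffixes]
for link in links:
    print(link)
print(len(links))  # 1 * 2 * 2 = 4 combinations
```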

Marie reports that the workbook also made it easy for her to create Bing sitemaps.

Note: a good way to insert rows into any of these tables is to click on the cell labelled “Count” at the bottom of the first column. Insert the number of needed rows and copy down any formulas from above as needed.

Please let me know if you have any questions.


**Spoiler Alert: Stop here if you want to solve the puzzle yourself! Answers (some correct and some not) will be given below.**

This innocent-seeming puzzle stirred several math teachers to exchange ideas, diagrams and animations. Some of them changed their minds about it as they worked through the problem. For example, Simon Gregg first thought that the answer is 1/4:

Upon further reflection, he changed his answer to 1/3:

There were some who used calculus to find a probability of 0.386. Gregg came to suspect they were right:

**My Attempts**

I approached the problem in a few ways myself. I thought about the sample space as sketched by Kaleb Allinson:

Assuming the spaghetti stick is one unit long, the horizontal axis represents the length of the shorter piece after the first break. Let’s call this $x$. It must range between 0 and 1/2. The longer piece is then itself broken into a left piece with length $y$ and a right piece with length $z$. I interpret the vertical axis in Allinson’s chart as the length $y$. It could range from just above zero to nearly one if $x$ is very small, and it could range between zero and around 1/2 if $x$ itself is close to 1/2. The length of the third side is not displayed in this diagram, but we can find it for any point by using $z = 1 - x - y$.

The sample space of possible points combines the three right triangles in Allinson’s diagram. The shaded area in the middle represents the combinations of $x$ and $y$ that can lead to a spaghetti triangle. These combinations work because of the *triangle inequalities*, which require that no side be longer than the sum of the other two sides. Otherwise, the two shorter sides might just dangle pathetically off the long side like the arms of a T-Rex.

Allinson saw that one-third of the sample space corresponded to valid triangles, and so concluded that the probability of forming a triangle is 1/3. However, not all parts of the diagram are equally likely. All permitted values of $x$ are equally likely, but the distribution of $y$ depends on $x$. When $x$ is very small, there is about a 50% chance that $y$ is less than 1/2. In contrast, when $x$ is near 1/2, there is almost a 100% chance that $y$ is less than 1/2. The probability density in the diagram is not uniform; it increases from left to right.

In my mind, I pictured the probability density as a slanted roof above the diagram (in a third dimension). By some mental calculation I concluded that the probability of landing in the shaded area (what we might call the “triangle of triangles”) was 1/2.

To check my result, I made a spreadsheet to simulate many random spaghetti snaps. For each simulation, I sampled values for two random variables $u$ and $v$, each uniformly distributed from 0 to 1. I used $u$ to set $x$, the length of the shorter piece after the first break. We can let $x$ equal the lesser of $u$ and $1 - u$. We then find the next piece with $y = v(1 - x)$. As mentioned above, the third piece is $z = 1 - x - y$. I set up the model to run 10,000 simulations and flag which ones could form triangles.
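The spreadsheet model translates directly into a short Monte Carlo script (a sketch of the procedure described above, with a fixed seed so the run is reproducible):

```python
import random

def snap(rng):
    """One spaghetti snap: break at u, then break the longer piece at fraction v."""
    u, v = rng.random(), rng.random()
    x = min(u, 1 - u)          # shorter piece after the first break
    y = v * (1 - x)            # second break, uniform along the longer piece
    z = 1 - x - y
    return max(x, y, z) < 0.5  # a triangle forms iff every piece is under 1/2

rng = random.Random(42)
trials = 100_000
hits = sum(snap(rng) for _ in range(trials))
print(hits / trials)  # close to 2*ln(2) - 1, about 0.386
```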

This is the result of one run with the $x$ and $y$ values plotted for each simulation. The blue dots indicate valid triangles. It looks like one-third of the area is good for triangles. However, the points are more crowded on the right than on the left, and there are 3,823 blue dots out of 10,000 total. The blue triangle is denser than the orange areas and gets more than its share of the simulations.

Here is the same run plotted against the random variables $u$ and $v$. The blue dots take up about 38% of the area of this evenly distributed chart.

I ran the spreadsheet several times, and each time about 38% or 39% of the simulations allowed triangles. I had to doubt my answer of 1/2. I realized that the probability surface was not simply a flat incline but something with a curve to it.

I communicated by Twitter and email with Mike Andrejkovics, a math teacher and math rapper(!). He thought at first that the answer was 1/4, and he created a cool video to make his case. Mike then realized that his solution applied to a slightly different problem (see **Variations** below).

Like some of the other math teachers on Twitter, Mike used calculus to find the answer of 0.386 for the Marilyn Burns version of the problem. I decided to work through the calculus for myself. Mike had gotten me thinking some more about the probability density. None of the calculus solutions I had seen had provided an explicit formula for the probability density, but I thought that such a formula would clarify the problem for me.

Let $f(x, y)$ be our probability density function (“pdf”) over the sample space in Kaleb Allinson’s diagram. This space ranges from 0 to 1/2 for the $x$ coordinate and from 0 to $1 - x$ for the $y$ coordinate. We use $x$ to represent the length of the shorter piece after the first break. After the longer piece is broken into two pieces, the new piece on the left has length $y$. The total probability over this space should equal one. In calculus notation, this means

$$\iint_S f(x, y)\,dA = 1,$$

or

$$\int_0^{1/2} \int_0^{1-x} f(x, y)\,dy\,dx = 1.$$

The first statement integrates over the sample space $S$. The second statement is equivalent. Working from the inside out, we begin by integrating as $y$ ranges from 0 to $1 - x$. This integral is basically just a function of $x$, so we can write $g(x) = \int_0^{1-x} f(x, y)\,dy$. Now our double integral becomes the single integral

$$\int_0^{1/2} g(x)\,dx = 1.$$

We want $g(x)$ to be constant, because we assume that $x$ is uniformly distributed from 0 to 1/2. The total probability in the sample space for each value of $x$ should be the same. If $g(x)$ is constant, then it must equal 2 so that $\int_0^{1/2} 2\,dx = 1$. Now we work back to the function $f$. We need

$$\int_0^{1-x} f(x, y)\,dy = 2,$$

which works with the pdf $f(x, y) = \frac{2}{1-x}$. Note that $f$ is a function of $x$ but not of $y$. It grows along a curve from left to right.
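A quick numerical check of this density (a sketch): for any $x$, the inner integral over $y$ equals 2, so the whole density integrates to one over the sample space.

```python
# Riemann-sum check that f(x, y) = 2 / (1 - x) integrates to 1 over the
# sample space: x in [0, 1/2], y in [0, 1 - x].
nx, ny = 400, 400
hx = 0.5 / nx
total = 0.0
for i in range(nx):
    x = (i + 0.5) * hx
    hy = (1 - x) / ny
    inner = sum(2 / (1 - x) * hy for _ in range(ny))  # equals 2 for every x
    total += inner * hx
print(round(total, 6))  # 1.0
```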

Let’s look at this probability density function over the sample space. The volume enclosed by the figure is exactly one cubic unit, as required.

I like this shape. Every side could be made from a flat or rolled sheet of paper. Maybe architects should study probability density functions for inspiration!

Now let’s complete the solution to the spaghetti problem. We want to know the probability of landing in the shaded “triangle of triangles.” Call this region $T$. Then the probability of making a triangle is

$$\iint_T f(x, y)\,dA = \int_0^{1/2} \frac{2x}{1-x}\,dx = 2\ln 2 - 1 \approx 0.386,$$

where $\ln$ represents the natural logarithm. I hope you don’t mind that I skipped over some of the integration steps at the end.
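The closed form $2\ln 2 - 1$ can be verified against a direct numerical integration of $\int_0^{1/2} \frac{2x}{1-x}\,dx$, the region integral after the inner $y$-integration (a sketch):

```python
import math

p_exact = 2 * math.log(2) - 1  # closed-form probability

# Midpoint rule for the integral of 2x / (1 - x) over [0, 1/2]
n = 200_000
h = 0.5 / n
p_numeric = sum(2 * ((i + 0.5) * h) / (1 - (i + 0.5) * h) * h for i in range(n))

print(round(p_exact, 6), round(p_numeric, 6))  # both about 0.386294
```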

Let’s see what the solution looks like. Here is the probability density with the region for valid triangles colored in red. The volume of the red portion is 0.386.

**Variations**

The first version of the puzzle that Mike Andrejkovics solved appears as the “Walking Stick Puzzle” (#120) in *The Puzzle Universe* by Ivan Moscovich. This book was a great Christmas present from my wife. In the Walking Stick problem, the two breaks are made independently. It is not required that the second break occur along the longer piece after the first break.

I told my daughter (with whom I blog at Zeno’s Meatball) about the spaghetti puzzle. She interpreted the problem in physical terms. How do spaghetti strands really break? Aren’t they more likely to break near the middle than near the ends? Perhaps we grownups have been too simplistic with our uniform distributions.

Stay tuned for a new variation on the puzzle that I will present in an upcoming post. I hope you enjoy it.

Copyright 2016. All Rights Reserved.
