<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Walter&#039;s Little World</title>
	<atom:link href="https://walterstovall.online/feed/" rel="self" type="application/rss+xml" />
	<link>https://walterstovall.online/</link>
	<description>Personal interests</description>
	<lastBuildDate>Wed, 21 Jan 2026 19:05:06 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	

<image>
	<url>https://walterstovall.online/wp-content/uploads/2020/09/cropped-bikeicon-1-150x150.jpg</url>
	<title>Walter&#039;s Little World</title>
	<link>https://walterstovall.online/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Projecting Social Security Benefits</title>
		<link>https://walterstovall.online/2026/01/21/projecting-social-security-benefits/</link>
					<comments>https://walterstovall.online/2026/01/21/projecting-social-security-benefits/#respond</comments>
		
		<dc:creator><![CDATA[Walter Stovall]]></dc:creator>
		<pubDate>Wed, 21 Jan 2026 16:21:28 +0000</pubDate>
				<category><![CDATA[finances]]></category>
		<guid isPermaLink="false">https://walterstovall.online/?p=30583</guid>

					<description><![CDATA[<p>So are cuts of the magnitude indicated by trust fund balances inevitable? No. That’s what will happen with no action by Congress. Will Congress act? Yes. It’s virtually inevitable. Allowing cuts of this size to current and near-term retiree benefits would be politically unthinkable. Will Congress eliminate the cuts altogether? No. The funding challenge and its impact on the rest of the population is what they’ll be up against, and it’s big.</p>
<p>The post <a href="https://walterstovall.online/2026/01/21/projecting-social-security-benefits/">Projecting Social Security Benefits</a> appeared first on <a href="https://walterstovall.online">Walter&#039;s Little World</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>I retired about a year ago. As part of an annual review, I&#8217;m taking a fresh look at the retirement plan, considering the validity of its projections and where they might need to be adjusted. Our retirement plan is heavily affected by Social Security benefits. The reality is that those benefits are scheduled to be paid in the future, and there’s some risk that the payments won’t be as large as promised in Social Security income statements.</p>



<p>The projected income shown in the annual&nbsp;<em>Your Social Security Statement</em>&nbsp;assumes that the money being promised will actually be available to be paid from the balance of the Social Security Trust Fund. The problem is that there will clearly be challenges maintaining that fund’s balance in the coming years.</p>



<p>The main drivers affecting Social Security during our planned retirement years are:</p>



<ul class="wp-block-list">
<li><em>Boomers retiring + longer lifespans</em>. Results in more people collecting benefits for more years.</li>



<li><em>Lower birth rates in the post-boomer decades</em>. Results in fewer payroll-tax payers supporting the retired population.</li>
</ul>



<p>The Social Security Administration releases occasional projections of the trust fund balance. Based on the&nbsp;<a href="https://www.ssa.gov/oact/TRSUM/2024/index.html">2024 Social Security and Medicare Boards of Trustees Statement</a>, the retirement fund (OASI) will run out of money in 2033 (the combined OASDI trust funds last until 2034). Once the fund is exhausted, benefit payments become limited to current payroll taxes, which are projected to cover&nbsp;<em>79% of scheduled benefits</em>. Even&nbsp;<a href="https://www.ssa.gov/pubs/marketing/fact-sheets/will-social-security-be-there-for-me.pdf">more significant cuts</a>&nbsp;are on the table in subsequent years without action.</p>



<p>What’s the trend on trust fund projections? Well, the 2033 exhaustion&nbsp;<a href="https://apnews.com/article/social-security-medicare-trust-fund-trump-74e13292f510739724a555d7ded7c1a3">moves the year up from a previous prediction of 2036</a>&nbsp;made in 2021. That sounds alarming, but this at least doesn’t appear to be a trend that will continue. Payroll tax revenue was lower than expected post-COVID, labor force participation didn’t rebound quickly, and many people retired early. There were also higher COLAs with the sudden spike in inflation (which ratchets up benefits permanently). Finally, a post-COVID drop in life expectancy shaped forecasts, but that effect has proven temporary, so people are actually collecting benefits for longer than those forecasts assumed.</p>



<p>So are cuts of the magnitude indicated by trust fund balances inevitable? <strong>No</strong>. That’s what will happen with no action by Congress. Will Congress act? <strong>Yes</strong>. It’s virtually inevitable. Allowing cuts of this size to current and near-term retiree benefits would be politically unthinkable. Will Congress eliminate the cuts altogether? No. The funding challenge and its impact on the rest of the population is what they’ll be up against, and it’s big.</p>



<p>So what might Congress do other than cut benefits?</p>



<ul class="wp-block-list">
<li><em>Increase payroll taxes</em>, currently at 12.4% (6.2% employee, 6.2% employer). A 1% increase (0.5% on each side) would bring in a lot of money. The longer action is delayed, the more dramatic the fix needs to be, compared with taxes phased in over several years.</li>



<li><em>Raise or eliminate the taxable wage cap</em>. Right now contributions cap at a $176K income level. Congress might raise that cap, or reapply FICA to earnings above $400K (i.e. the “donut hole”), as proposed in the&nbsp;<a href="https://larson.house.gov/sites/evo-subsites/larson.house.gov/files/evo-media-document/overall-one-pager_0.pdf">Social Security 2100 Act</a>.</li>



<li><em>Tax certain non-wage income</em>. Congress might close self-employment loopholes, tax some fringe benefits, or possibly tax investment income (politically difficult).</li>



<li><em>Raise full retirement age</em>.</li>



<li>Other measures, including modified wage indexing, reduced COLAs, and means testing.</li>
</ul>
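The wage-cap and donut-hole mechanics above can be sketched in a few lines. This is a minimal illustration, not a tax calculator: it uses the employee-side 6.2% rate, the roughly $176K cap, and the $400K threshold mentioned above, and the round-number figures are assumptions for demonstration.

```python
# Illustrative sketch of the employee-side Social Security (OASDI) tax,
# using the figures from the post: a 6.2% employee rate, a ~$176K wage cap,
# and a hypothetical "donut hole" reapplying the tax above $400K.
RATE = 0.062
WAGE_CAP = 176_000        # approximate current taxable maximum
DONUT_TOP = 400_000       # threshold proposed in the Social Security 2100 Act

def employee_oasdi_tax(wages, donut_hole=False):
    """Tax wages up to the cap; optionally also tax wages above $400K."""
    tax = RATE * min(wages, WAGE_CAP)
    if donut_hole and wages > DONUT_TOP:
        tax += RATE * (wages - DONUT_TOP)
    return tax

# Under current law, earnings above the cap are untaxed:
print(round(employee_oasdi_tax(150_000)))                   # 9300
print(round(employee_oasdi_tax(500_000)))                   # 10912 (capped)
# With the donut hole, earnings above $400K are taxed again:
print(round(employee_oasdi_tax(500_000, donut_hole=True)))  # 17112
```

Note how the cap makes the tax regressive at the top: under current law a $500K earner pays the same dollar amount as a $176K earner, which is exactly what the donut-hole proposal targets.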



<p>Most likely, what’s done will be a combination of these things, and it will do only a partial job of restoring 100% of benefits. Each of these actions will see political push-back from large numbers of affected individuals. How all that shakes out is uncertain.</p>



<p><strong>Assuming there will be no cuts appears to be an unwise strategy.</strong></p>



<h2 class="wp-block-heading">How big will the cuts be?</h2>



<p>That’s the million-dollar question. Who knows? Assuming no cuts at all doesn’t make sense. Assuming the full default cuts (at least for current/near-term retirees) is probably overly conservative.</p>



<p>In the table below, the&nbsp;<strong>default cuts</strong>&nbsp;reflect what current law requires if Congress does nothing. Once the trust fund is depleted around 2033, benefits must be limited to annual payroll tax, producing an immediate cut of roughly 20% that slowly worsens over time as demographic trends continue (these figures are less certain than the trust fund projection itself, but the direction is clear). This path assumes&nbsp;<strong>zero</strong>&nbsp;political intervention, no additional revenue, and no changes to benefit formulas. Cuts on this scale have never occurred when a large, well-understood cliff was visible years in advance.</p>



<p>The&nbsp;<strong>predicted cuts</strong>&nbsp;assume Congress behaves as it historically has: acting late, but not allowing an abrupt, permanent reduction of the default magnitude for current and near-term retirees. In practice this usually means a combination of measures: raising the taxable wage cap, modest payroll tax increases, and slowing benefit growth for higher earners and future retirees. That should partially restore solvency, but may well not fully return benefits to 100% of scheduled levels. The result is a smaller, temporary shortfall around the depletion date that gradually stabilizes at a lower but politically tolerable level.</p>



<figure class="wp-block-table"><table class="has-fixed-layout"><thead><tr><th>Year</th><th>Default cut (do nothing)</th><th>Predicted cut (likely action)</th></tr></thead><tbody><tr><td>2030</td><td>0%</td><td>0%</td></tr><tr><td>2031</td><td>0%</td><td>0%</td></tr><tr><td>2032</td><td>0%</td><td>0%</td></tr><tr><td><strong>2033</strong></td><td><strong>-20%</strong></td><td><strong>-15%</strong></td></tr><tr><td>2034</td><td>-20%</td><td>-14%</td></tr><tr><td>2035</td><td>-21%</td><td>-13%</td></tr><tr><td>2036</td><td>-21%</td><td>-12%</td></tr><tr><td>2037</td><td>-22%</td><td>-11%</td></tr><tr><td>2038</td><td>-22%</td><td>-10%</td></tr><tr><td>2040</td><td>-22%</td><td>-10%</td></tr><tr><td>2045</td><td>-24%</td><td>-10%</td></tr><tr><td>2050</td><td>-25%</td><td>-10%</td></tr><tr><td><strong>2054</strong></td><td><strong>-26%</strong></td><td><strong>-10%</strong></td></tr></tbody></table><figcaption class="wp-element-caption"><em>Percentages Post 2033 are speculative</em></figcaption></figure>



<p>To simplify planning, it might make sense to assume a permanent cut of about 12% starting in 2033.</p>
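That rule of thumb can be wired into a projection in a few lines. This is a planning sketch only; the $30,000/year scheduled benefit is a made-up placeholder, and the 12%/2033 figures are the assumptions discussed above.

```python
# Sketch: apply an assumed permanent 12% Social Security cut starting in 2033
# to a planned benefit stream. The $30,000/year benefit is a placeholder.
ASSUMED_CUT = 0.12
CUT_START_YEAR = 2033

def projected_benefit(scheduled_benefit, year):
    """Scheduled annual benefit, reduced by the assumed cut from 2033 onward."""
    if year >= CUT_START_YEAR:
        return scheduled_benefit * (1 - ASSUMED_CUT)
    return scheduled_benefit

for year in (2030, 2033, 2040):
    print(year, round(projected_benefit(30_000, year)))
# 2030 30000
# 2033 26400
# 2040 26400
```

A single flat haircut like this is easier to plug into a retirement spreadsheet than the year-by-year table, and it lands between the default and predicted columns.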



<p class="has-small-font-size">I’m just an individual sharing my thoughts and analysis, not a financial professional. Do your own research or consult a qualified advisor before making financial decisions.</p>
<p>The post <a href="https://walterstovall.online/2026/01/21/projecting-social-security-benefits/">Projecting Social Security Benefits</a> appeared first on <a href="https://walterstovall.online">Walter&#039;s Little World</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://walterstovall.online/2026/01/21/projecting-social-security-benefits/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Going to the Hands Off march was mandatory</title>
		<link>https://walterstovall.online/2025/04/08/going-to-the-hands-off-march-was-mandatory/</link>
					<comments>https://walterstovall.online/2025/04/08/going-to-the-hands-off-march-was-mandatory/#respond</comments>
		
		<dc:creator><![CDATA[Walter Stovall]]></dc:creator>
		<pubDate>Tue, 08 Apr 2025 17:07:28 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">https://walterstovall.online/?p=30553</guid>

					<description><![CDATA[<p>The weather was glorious, but it was more than just getting some sunshine.</p>
<p>The post <a href="https://walterstovall.online/2025/04/08/going-to-the-hands-off-march-was-mandatory/">Going to the Hands Off march was mandatory</a> appeared first on <a href="https://walterstovall.online">Walter&#039;s Little World</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>The weather was glorious, but it was <a href="https://walterstovall.online/maps/hands-off-march-atl/">more than just getting some sunshine</a>.</p>
<p>The post <a href="https://walterstovall.online/2025/04/08/going-to-the-hands-off-march-was-mandatory/">Going to the Hands Off march was mandatory</a> appeared first on <a href="https://walterstovall.online">Walter&#039;s Little World</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://walterstovall.online/2025/04/08/going-to-the-hands-off-march-was-mandatory/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Atlanta and Cobb County join forces to extend the Silver Comet to downtown Atlanta</title>
		<link>https://walterstovall.online/2024/09/06/atlanta-and-cobb-county-join-forces-to-extend-the-silver-comet-to-downtown-atlanta/</link>
					<comments>https://walterstovall.online/2024/09/06/atlanta-and-cobb-county-join-forces-to-extend-the-silver-comet-to-downtown-atlanta/#respond</comments>
		
		<dc:creator><![CDATA[Walter Stovall]]></dc:creator>
		<pubDate>Fri, 06 Sep 2024 12:20:35 +0000</pubDate>
				<category><![CDATA[outdoors]]></category>
		<category><![CDATA[bicycle]]></category>
		<category><![CDATA[nature]]></category>
		<guid isPermaLink="false">https://walterstovall.online/?p=30456</guid>

					<description><![CDATA[<p>For several years now, organizers have fought hard to realize the dream of a continuous bike trail all the way from downtown Atlanta to Anniston, Alabama. For a couple of decades most of this has been in place, though it still ends well outside the I-285 perimeter. The amazing Silver Comet and Chief Ladiga trails have existed ... <a title="Atlanta and Cobb County join forces to extend the Silver Comet to downtown Atlanta" class="read-more" href="https://walterstovall.online/2024/09/06/atlanta-and-cobb-county-join-forces-to-extend-the-silver-comet-to-downtown-atlanta/" aria-label="Read more about Atlanta and Cobb County join forces to extend the Silver Comet to downtown Atlanta">Read more</a></p>
<p>The post <a href="https://walterstovall.online/2024/09/06/atlanta-and-cobb-county-join-forces-to-extend-the-silver-comet-to-downtown-atlanta/">Atlanta and Cobb County join forces to extend the Silver Comet to downtown Atlanta</a> appeared first on <a href="https://walterstovall.online">Walter&#039;s Little World</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>For several years now, organizers have fought hard to realize the dream of a continuous bike trail all the way from downtown Atlanta to Anniston, Alabama. For a couple of decades most of this has been in place, though it still ends well outside the I-285 perimeter. The amazing <a href="https://www.silvercometga.com/">Silver Comet</a> and <a href="https://www.silvercometga.com/chief-ladiga-trail/index-ladiga.shtml">Chief Ladiga</a> trails have existed in Georgia and Alabama respectively since the 1990s. Together, these connected trails form very nearly the longest continuous paved bicycle path in the world (now outdone by the <a href="https://www.dnr.state.mn.us/state_trails/paul_bunyan/index.html">Paul Bunyan State Trail</a>). Approaching Atlanta, the Silver Comet currently comes to an end in Cobb County just west of the Cobb East-West Connector. The <a href="http://www.connectthecomet.org/">Connect the Comet</a> project aims to extend the trail to the Chattahoochee River, and the City of Atlanta plans to take it from there, all the way to the <a href="https://beltline.org/visit/">Atlanta Beltline</a>.</p>



<p>There are two main components to this project. One of those (the most critical in my opinion) is to extend the Silver Comet down to the Chattahoochee River. The above image shows the path the trail will take. The other piece is to see the City of Atlanta complete a trail that improves the &#8220;bicycle experience&#8221; south of the river towards Atlanta.</p>



<p>The reason I emphasize the Cobb extension is just based on my experience trying to ride to the <a href="https://www.silvercometga.com/silver-comet-cobb-county/silver-comet-mavell.shtml">start of the Silver Comet</a> from downtown Atlanta. Atlanta&#8217;s trails get you a good start towards the Silver Comet with the recently completed <a href="https://roughdraftatlanta.com/2024/02/14/ground-broken-for-first-segment-of-silver-comet-connector-in-atlanta/">Woodall Rail Trail</a>, which enables a nice bicycle ride from downtown up to <a href="https://beltline.org/parks-trails/westside-park/">Westside Park</a> on offroad trails. From there, bicycle-friendly roads (like Gun Club Road and Hollywood Road) make for easy, pleasant riding to Atlanta Road and then across the Chattahoochee. At that point the trails and bicycle lanes abruptly end. Your only choice is to ride on a busy four-lane road with fast traffic and no buffer next to the road. It&#8217;s enough to shut me down &#8211; not my idea of fun.</p>



<h2 class="wp-block-heading">Those last couple miles to the Silver Comet</h2>



<p>The south end of the Silver Comet currently ends close to the East-West Connector. This will be extended mainly by paving a previous CSX railroad corridor to facilitate offroad travel down to Church Road. From there, the trail continues as a <em>sidepath</em> to Atlanta Road and down to the Chattahoochee River.</p>



<p><a href="https://s3.amazonaws.com/cobbcounty.org.if-us-east-1/s3fs-public/2024-02/Silver%20Comet%20Trail%20Extension%20Fact%20Sheet.pdf">See overview of the extension</a> and <a href="https://s3.amazonaws.com/cobbcounty.org.if-us-east-1/s3fs-public/2024-02/Silver%20Comet%20Trail%20Extension%20Concept%20Layout.pdf">more detail on segments</a></p>



<h2 class="wp-block-heading">Extending towards the Beltline</h2>



<p>Picking up at the Chattahoochee, the City of Atlanta has embraced a project that continues the trail from the river with a new offroad path under Marietta Blvd to <a href="https://www.buckhead.com/parks/standing-peachtree-park/">Standing Peachtree Park</a>, and down to Marietta Road.</p>



<figure class="wp-block-embed is-type-wp-embed is-provider-rough-draft-atlanta wp-block-embed-rough-draft-atlanta"><div class="wp-block-embed__wrapper">
<blockquote class="wp-embedded-content" data-secret="bHPbLQDg1r"><a href="https://roughdraftatlanta.com/2024/04/16/city-council-approves-6-5m-for-trail-designed-to-connect-downtown-atlanta-to-chattahoochee-river/">City council approves $6.5M for trail designed to connect Downtown Atlanta to Chattahoochee River</a></blockquote><iframe class="wp-embedded-content" sandbox="allow-scripts" security="restricted"  title="&#8220;City council approves $6.5M for trail designed to connect Downtown Atlanta to Chattahoochee River&#8221; &#8212; Rough Draft Atlanta" src="https://roughdraftatlanta.com/2024/04/16/city-council-approves-6-5m-for-trail-designed-to-connect-downtown-atlanta-to-chattahoochee-river/embed/#?secret=P8DxyxUz9V#?secret=bHPbLQDg1r" data-secret="bHPbLQDg1r" width="600" height="338" frameborder="0" marginwidth="0" marginheight="0" scrolling="no"></iframe>
</div></figure>



<p>The trail from the river to Marietta Blvd is a small part of plans for <a href="https://www.11alive.com/article/news/local/funding-approved-new-riverside-trail-aims-connect-downtown-atlanta-chattahoochee-river/85-9ad432db-3dac-454d-857c-1438119ca1d0">100 miles of trails</a> expected to take a couple decades to complete.</p>



<h2 class="wp-block-heading">When will it be done?</h2>



<p>Both the Silver Comet extension and the connection from the Chattahoochee to downtown are expected to be complete by the end of 2025, hopefully well ahead of the 2026 World Cup. I&#8217;ve never been one to put an emphasis on &#8220;showing off&#8221; for sporting events, but whatever works! I&#8217;m reminded of the 1996 Summer Olympics being a big motivator for many to complete the <a href="https://www.pathfoundation.org/stone-mountain-trail">Stone Mountain Trail</a> nearly 30 years ago. Atlanta had virtually no good bicycle trails before that. What a transformation it&#8217;s been since then!</p>
<p>The post <a href="https://walterstovall.online/2024/09/06/atlanta-and-cobb-county-join-forces-to-extend-the-silver-comet-to-downtown-atlanta/">Atlanta and Cobb County join forces to extend the Silver Comet to downtown Atlanta</a> appeared first on <a href="https://walterstovall.online">Walter&#039;s Little World</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://walterstovall.online/2024/09/06/atlanta-and-cobb-county-join-forces-to-extend-the-silver-comet-to-downtown-atlanta/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Let&#8217;s give some thanks for the railroad</title>
		<link>https://walterstovall.online/2024/04/09/lets-give-some-thanks-for-the-railroad/</link>
					<comments>https://walterstovall.online/2024/04/09/lets-give-some-thanks-for-the-railroad/#respond</comments>
		
		<dc:creator><![CDATA[Walter Stovall]]></dc:creator>
		<pubDate>Tue, 09 Apr 2024 12:48:12 +0000</pubDate>
				<category><![CDATA[outdoors]]></category>
		<category><![CDATA[bicycle]]></category>
		<guid isPermaLink="false">https://walterstovall.online/?p=30377</guid>

					<description><![CDATA[<p>I&#8217;m out on one of my first days this spring, enjoying a bicycle ride from one end of Atlanta to the other. The quality of the experience is directly related to the repurposing of abandoned railroad lines in our city, primarily by the Path Foundation. Without the dedication of large tracts of real estate to ... <a title="Let&#8217;s give some thanks for the railroad" class="read-more" href="https://walterstovall.online/2024/04/09/lets-give-some-thanks-for-the-railroad/" aria-label="Read more about Let&#8217;s give some thanks for the railroad">Read more</a></p>
<p>The post <a href="https://walterstovall.online/2024/04/09/lets-give-some-thanks-for-the-railroad/">Let&#8217;s give some thanks for the railroad</a> appeared first on <a href="https://walterstovall.online">Walter&#039;s Little World</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>I&#8217;m out on one of my first days this spring, enjoying a bicycle ride from one end of Atlanta to the other. The quality of the experience is directly related to the repurposing of abandoned railroad lines in our city, primarily by the <a href="https://www.pathfoundation.org/">Path Foundation</a>. Without the dedication of large tracts of real estate to railroad lines for about 150 years, it would today be impossible to enjoy a quiet ride through Atlanta, close to nature, and with a pretty privileged view of the &#8220;back side&#8221; of the city.</p>



<p>Railroads were designed to be out of sight, with minimal interruption to pedestrians and street traffic. Quite a lot of expensive construction and landscaping went into that, more and more as you get into densely populated areas. Ask yourself while driving through Atlanta and nearby: where do you sit and wait for trains to pass? Not a lot of places, I&#8217;d imagine. But trains are getting around nonetheless. Freight trains remain an important part of our transportation grid. As population density goes up, trains more often go over dedicated bridges. Where it goes up further, the trains in Atlanta usually go under the roads through tunnels. If you&#8217;re looking ahead of you, you don&#8217;t even know they&#8217;re there. While railroads remain important, their role has certainly declined from its heyday in the early to mid-1900s. Economic realities in the changing landscape of transportation technology have resulted in more and more rail lines being abandoned and left just sitting there.</p>



<p>While that opens a &#8220;land of opportunity&#8221; for me as a bicyclist riding the repurposed lines, happiness is not the only emotion I&#8217;m left with. My family was instrumental in helping build up railroads from the early days, through their rise in both passenger and freight transportation, and ultimately their decline, as trucks, automobiles, air travel, and (as always) water transport came to dominate. My father, Robert Stovall Jr., was the president of the <a href="https://www.msrailroads.com/C&amp;G.htm">Columbus and Greenville</a> (C&amp;G) railroad company. His father, Robert Stovall Sr., was president before him, and his father, <a href="https://www.findagrave.com/memorial/26928750/adam-tonquin-stovall">Adam Tonquin Stovall</a>, founded the C&amp;G. I came along when railroads were in decline. My first employment was at the railroad when I was ten years old. Most of what I remember about that is unloading railcars used to transport damaged groceries (like dented cans of food or bags of flour busted open). The C&amp;G would sell this food to farmers to feed their livestock, and I was there to help load it for them. Ultimately the C&amp;G was sold to Illinois Central in about 1972; it couldn&#8217;t continue given the economic conditions, as small operations like ours got bought up in big mergers.</p>



<p>But today I&#8217;m out on my bicycle, feeling a special kinship with the railroad as I ride the <a href="https://beltline.org/places-to-go/southside-trail/">Southside Beltline Trail</a> towards downtown Atlanta. Guthrie&#8217;s song rings in my head as I cruise along, <a href="https://www.youtube.com/watch?v=nQ0P-jaCFi4">rolling past houses, farms, and fields</a>. Riding mile after mile, the landscape and views change constantly. There&#8217;s no need to hurry or get out of anybody&#8217;s way. The scenery includes the backyards of neighborhoods, the back side of factories and other industry, and an up-close and personal view of city infrastructure like big power plants and other stuff you&#8217;re not meant to see. The advertising you see is not there for you; it&#8217;s a show of pride by the small business owners who built that place. In some cases what you see ahead might as well be rural. And all the while, you breeze past big and small roads and interstates as though they&#8217;re not even there, increasingly through long tunnels that seem like they were all constructed just for your enjoyment. They were in fact built at great monetary cost, with labor expended for a completely different reason of course. There&#8217;s something sort of perfect about that for me: a chance to be both sad and happy, with uninterrupted solitude in which to experience that to the fullest.</p>



<div class="wp-block-envira-envira-gallery"><div class="envira-gallery-feed-output"><img decoding="async" class="envira-gallery-feed-image" tabindex="0" src="https://walterstovall.online/wp-content/uploads/2024/04/SouthsideBeltline1-768x1024.jpeg?x52476" title="Near the start of the Southside Beltline" alt="" /></div></div>
<p>The post <a href="https://walterstovall.online/2024/04/09/lets-give-some-thanks-for-the-railroad/">Let&#8217;s give some thanks for the railroad</a> appeared first on <a href="https://walterstovall.online">Walter&#039;s Little World</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://walterstovall.online/2024/04/09/lets-give-some-thanks-for-the-railroad/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Split DNS done right using opnsense</title>
		<link>https://walterstovall.online/2024/01/06/split-dns-done-right-using-opnsense/</link>
					<comments>https://walterstovall.online/2024/01/06/split-dns-done-right-using-opnsense/#respond</comments>
		
		<dc:creator><![CDATA[Walter Stovall]]></dc:creator>
		<pubDate>Sat, 06 Jan 2024 16:45:25 +0000</pubDate>
				<category><![CDATA[tech]]></category>
		<guid isPermaLink="false">https://walterstovall.online/?p=30359</guid>

					<description><![CDATA[<p>I can&#8217;t get over how simple and powerful my OPNsense router is. It&#8217;s almost as easy to set up as any consumer router, as long as you know to leave alone the stuff you don&#8217;t understand. I recently set up OPNsense on a Protectli VP2420 and I&#8217;ve been really happy with it. Running a home lab with ... <a title="Split DNS done right using opnsense" class="read-more" href="https://walterstovall.online/2024/01/06/split-dns-done-right-using-opnsense/" aria-label="Read more about Split DNS done right using opnsense">Read more</a></p>
<p>The post <a href="https://walterstovall.online/2024/01/06/split-dns-done-right-using-opnsense/">Split DNS done right using opnsense</a> appeared first on <a href="https://walterstovall.online">Walter&#039;s Little World</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>I can&#8217;t get over how simple and powerful my <a href="https://opnsense.org/">OPNsense router</a> is. It&#8217;s almost as easy to set up as any consumer router, as long as you know to leave alone the stuff you don&#8217;t understand. I recently set up OPNsense on a <a href="https://protectli.com/product/vp2420/">Protectli VP2420</a> and I&#8217;ve been really happy with it.</p>



<p>Running a <a href="https://linuxhandbook.com/homelab/">home lab</a> with public-facing services, you run into the problem of <a href="https://networkinterview.com/split-domain-name-system-split-dns/">split DNS</a>. Any name, like my home.stovallhut.online web page, needs to be registered with a public IP address to be reachable over the internet. The problem is, if you&#8217;re at home then you should be contacting a local address on your network (some routers let you use <a href="https://www.techtarget.com/searchunifiedcommunications/definition/hairpinning">reflection/hairpinning</a> to get around that, but it has its own issues). My OPNsense router makes this pretty easy to manage with its <a href="https://docs.opnsense.org/manual/unbound.html">Unbound DNS service and DNS overrides</a>.</p>
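For a sense of what such an override amounts to under the hood, here is roughly the Unbound configuration it generates. The hostname is the one from this post; the 192.168.1.50 address is a made-up LAN address, and on OPNsense you would enter this through the GUI (Services &gt; Unbound DNS &gt; Overrides) rather than editing a config file:

```conf
server:
  # Answer queries for this name with the local LAN address instead of
  # the public IP (hypothetical address; OPNsense generates this for you)
  local-data: "home.stovallhut.online. IN A 192.168.1.50"
```

Clients on the LAN that use the router for DNS then get the internal address, while everyone else still resolves the public one.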



<p>That all works pretty well, but the icing on the cake came when I figured out (with the help of <a href="https://forum.opnsense.org/index.php?topic=9245.0">a great guide</a>) how to forward queries to my local DNS even when the client software specifically requested a different DNS server. If a client sends DNS queries to Google&#8217;s public DNS at 8.8.8.8, my router will now STILL handle the request locally if it can, without contacting Google. And if it does contact a public server, it won&#8217;t be Google, and the query will go out using <a href="https://homenetworkguy.com/how-to/configure-dns-over-tls-unbound-opnsense/">DNS over TLS</a> so my lookups are private (at least from third parties like my ISP).</p>
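Conceptually, the trick is a NAT port-forward that intercepts any outbound DNS and redirects it to the router itself. In raw pf terms (OPNsense generates rules like this from Firewall &gt; NAT &gt; Port Forward, so this exact rule is an illustration with an assumed interface name, not something to paste in) it looks roughly like:

```conf
# Redirect any DNS query leaving the LAN, even one addressed to 8.8.8.8,
# to the router's own Unbound listener (illustrative pf syntax; "igb1"
# is a placeholder LAN interface name)
rdr pass on igb1 proto { tcp, udp } from any to !(igb1) port 53 -> 127.0.0.1 port 53
```

The client believes it talked to 8.8.8.8; in reality Unbound answered, applying the same overrides and DNS-over-TLS upstream as everything else.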



<p>Amazing device &#x1f642;</p>
<p>The post <a href="https://walterstovall.online/2024/01/06/split-dns-done-right-using-opnsense/">Split DNS done right using opnsense</a> appeared first on <a href="https://walterstovall.online">Walter&#039;s Little World</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://walterstovall.online/2024/01/06/split-dns-done-right-using-opnsense/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Automatically backup your opnsense router to a Synology NAS via SFTP</title>
		<link>https://walterstovall.online/2023/11/13/automatically-backup-your-opnsense-router-to-a-synology-nas-via-sftp/</link>
					<comments>https://walterstovall.online/2023/11/13/automatically-backup-your-opnsense-router-to-a-synology-nas-via-sftp/#respond</comments>
		
		<dc:creator><![CDATA[Walter Stovall]]></dc:creator>
		<pubDate>Mon, 13 Nov 2023 13:26:03 +0000</pubDate>
				<category><![CDATA[tech]]></category>
		<guid isPermaLink="false">https://walterstovall.online/?p=30025</guid>

					<description><![CDATA[<p>I recently setup an opnsense router to handle network traffic for my home lab. There&#8217;s a lot of configuration done at opnsense in terms of interfaces, IP reservations, unbound DNS, firewall settings, VPN, security certificates, and more. Aside from some manual backup options, how can all this configuration be preserved if the hardware fails? Well, ... <a title="Automatically backup your opnsense router to a Synology NAS via SFTP" class="read-more" href="https://walterstovall.online/2023/11/13/automatically-backup-your-opnsense-router-to-a-synology-nas-via-sftp/" aria-label="Read more about Automatically backup your opnsense router to a Synology NAS via SFTP">Read more</a></p>
<p>The post <a href="https://walterstovall.online/2023/11/13/automatically-backup-your-opnsense-router-to-a-synology-nas-via-sftp/">Automatically backup your opnsense router to a Synology NAS via SFTP</a> appeared first on <a href="https://walterstovall.online">Walter&#039;s Little World</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>I recently set up an <a href="https://opnsense.org/">opnsense router</a> to handle network traffic for my home lab. There&#8217;s a lot of configuration done at opnsense in terms of interfaces, IP reservations, unbound DNS, firewall settings, VPN, security certificates, and more. Aside from some manual backup options, how can all this configuration be preserved if the hardware fails? Well, opnsense has a <a href="https://docs.opnsense.org/manual/settingsmenu.html#cron">cron-job</a> facility that lets you schedule backups and other activity. That just doesn&#8217;t work for me, though &#8211; the only relevant option there is a Remote Backup job that will back up the opnsense settings to a <a href="https://en.wikipedia.org/wiki/GitHub">github server</a>, and that&#8217;s it.</p>



<p>I&#8217;m not running a github server in my lab, and I&#8217;d rather not store this on the <a href="https://github.com/">public github server</a> even if my account is supposedly secure (anything can be compromised). The configuration is very sensitive, including security certificates, user accounts and passwords, firewall rules and more. I&#8217;d prefer to back up the configuration to my <a href="https://www.synology.com/en-us">Synology NAS</a> via <a href="https://en.wikipedia.org/wiki/SSH_File_Transfer_Protocol">SFTP</a>. But the opnsense <em>Command</em> prompt offers no such selection. Github is it!</p>



<p>Stuck, right? <mark style="background-color:var(--accent)" class="has-inline-color has-base-3-color">No!</mark> While it looks like I&#8217;m captive to what&#8217;s available in the gui, I figured out how to pull this off after a good bit of hunting and educating myself on some internals of opnsense and the <a href="https://en.wikipedia.org/wiki/FreeBSD">FreeBSD</a> OS it runs on top of. One avenue (that is in fact a dead-end) is to make an SSH connection to the opnsense router and set up a cron job using the command line <em>crontab -e</em>. This will appear successful at first, <span style="text-decoration: underline;">but if you modify ANY cron job settings in the opnsense gui, your cron job will be removed!</span></p>



<p>There&#8217;s actually a fairly easy way to extend the picklist of Commands you see in the opnsense GUI to include a custom job that does exactly what I want. This post describes the steps to do that.</p>



<h2 class="wp-block-heading">Objectives</h2>



<p>The steps I show below will first configure the Synology to accept SFTP connections for the purpose of sending it XML configuration backups from opnsense. We&#8217;ll set up a new user on the NAS called <em>opnsense_backup</em>. This user has limited privileges, allowing it only read/write access to a new folder where the backups will be stored. For security, the SFTP login that opnsense executes will authenticate through <a href="https://www.ssh.com/academy/ssh-keys">SSH keys</a> (password prompts would not work; there&#8217;s nobody to type one in). The backup is done by a script that you can install on opnsense and register as a new cron-job type that the opnsense gui recognizes. Finally, <a href="https://docs.opnsense.org/manual/settingsmenu.html#cron">in the opnsense gui we&#8217;ll set up the cron job</a> to execute the backup with the desired frequency.</p>



<h2 class="wp-block-heading">Skills you need</h2>



<p>To execute the steps in this post you need only basic familiarity with the <a href="https://www.synology.com/en-us/dsm">DSM</a> gui at the NAS and the <a href="https://docs.opnsense.org/">opnsense gui</a> for the router. You need to be able to make an <a href="https://en.wikipedia.org/wiki/Secure_Shell">SSH</a> connection to the NAS with admin privileges, and an SSH connection to the opnsense router with root access. You also need a basic understanding of <a href="https://www.howtogeek.com/102468/a-beginners-guide-to-editing-text-files-with-vi/">using vi</a> (no editing of existing files, just creating new ones).</p>



<h2 class="wp-block-heading">Create NAS backup destination and user account</h2>



<p>In DSM at the NAS, go to <em>Control Panel -&gt; Shared Folder</em> and create a new shared folder called opnsense_backup. This folder will collect the XML backup files from opnsense. As shown later, the files are all time-stamped with unique names that identify the date/time of the backup, and we&#8217;ll set up file-rotation so backups older than 30 days get deleted. Remember to add this new share to your <a href="https://kb.synology.com/en-global/DSM/tutorial/Quick_Start_Hyper_Backup">HyperBackup</a>/other backup software to further protect these files.</p>
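<p>For reference, the time-stamps in the backup filenames come straight from <em>date</em>. This is just an illustration of the naming scheme the backup script (shown later) produces; run it anywhere to see a sample name.</p>

```shell
# Backup files are named config-backup-YYYY-MM-DD-HHMM.xml
echo "config-backup-$(date +%Y-%m-%d-%H%M).xml"
```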



<p>At <em>Control Panel -> File Services -> FTP</em> make sure the SFTP service is enabled. At <em>Control Panel -> Users and Groups</em> <a href="https://kb.synology.com/en-us/DSM/tutorial/user_enable_home_service">make sure user homes are enabled</a>, then create a new opnsense_backup user account. Give the user a strong password. For privileges, give this account access to its own home folder, the opnsense_backup share created above, and the SFTP application itself. This will all &#8220;limit the damage&#8221; should this login ever be compromised.</p>



<figure class="wp-block-image size-full"><img fetchpriority="high" decoding="async" width="847" height="430" src="https://walterstovall.online/wp-content/uploads/2023/11/image-1.png?x52476" alt="" class="wp-image-30033" srcset="https://walterstovall.online/wp-content/uploads/2023/11/image-1.png 847w, https://walterstovall.online/wp-content/uploads/2023/11/image-1-300x152.png 300w, https://walterstovall.online/wp-content/uploads/2023/11/image-1-768x390.png 768w" sizes="(max-width: 847px) 100vw, 847px" /></figure>



<p>At an SSH prompt on the NAS, execute the following commands to create a destination for SSH keys.</p>



<pre class="wp-block-code"><code># Create location to store opnsense_backup user's keys
mkdir /var/services/homes/opnsense_backup/.ssh
touch /var/services/homes/opnsense_backup/.ssh/authorized_keys</code></pre>



<h2 class="wp-block-heading">Configure opnsense to make secure SFTP connections</h2>



<p>At an SSH prompt on opnsense, generate a new SSH key pair with ssh-keygen.</p>



<pre class="wp-block-code"><code># Generate a key
ssh-keygen
# accept default filename /root/.ssh/id_rsa at prompt, supply no passphrase</code></pre>



<p>When ssh-keygen runs it will prompt you for a filename to store the key in. I accepted the default of <em>/root/.ssh/id_rsa</em>. Take note of the name and use it below if yours is different. You&#8217;ll also be prompted to enter a passphrase. Do <strong>NOT</strong> do that &#8211; just press enter. Passphrases are generally a good idea, but not in this case where the job is automated, with nobody to answer a passphrase-prompt.</p>
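<p>If you prefer to skip the prompts entirely, the same key can be generated non-interactively. This is a sketch of the interactive steps above, assuming OpenSSH&#8217;s ssh-keygen; the <em>-f</em> path and the empty <em>-N</em> passphrase mirror the defaults I accepted.</p>

```shell
# Generate an RSA key pair with no passphrase, without prompting
# (equivalent to accepting the defaults interactively)
ssh-keygen -t rsa -f /root/.ssh/id_rsa -N ""
```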



<p>Now enter the following command to copy the generated key to the NAS. You&#8217;ll be prompted for a password &#8211; supply the password created on the NAS for the opnsense_backup user. Note my use of port 22 below &#8211; if your SFTP is configured with a different port then use that. Replace <em>&lt;your-NAS-hostname&gt;</em> with the hostname or IP address of your NAS.</p>



<pre class="wp-block-code"><code># Copy key to NAS
scp -P 22 /root/.ssh/id_rsa.pub opnsense_backup@<mark style="background-color:var(--accent)" class="has-inline-color has-base-3-color">&lt;your-NAS-hostname&gt;</mark>:/home/</code></pre>



<p>When that command completes, you should find the id_rsa.pub file in the opnsense_backup user&#8217;s home folder on the NAS.</p>



<h2 class="wp-block-heading">Register SSH key at the NAS</h2>



<p>We&#8217;ve stored id_rsa.pub on the NAS above; now it needs to be properly registered as an SSH key for this user on the NAS.</p>



<p>In an SSH session on the NAS, log in as admin and execute the following commands to append the new key to the <em>authorized_keys</em> file we created earlier. It&#8217;s also critically important to set the file permissions and ownership (chmod/chown below). <strong>SFTP will NOT accept the connection if you don&#8217;t do this!</strong></p>



<pre class="wp-block-code"><code># Save the key into authorized_keys file then remove and set permissions
cat /volume1/homes/opnsense_backup/id_rsa.pub &gt;&gt; /volume1/homes/opnsense_backup/.ssh/authorized_keys
rm /volume1/homes/opnsense_backup/id_rsa.pub
chmod 700 /volume1/homes/opnsense_backup/.ssh
chmod 600 /volume1/homes/opnsense_backup/.ssh/authorized_keys
chown -R opnsense_backup:users /volume1/homes/opnsense_backup/.ssh</code></pre>



<p><em>Note that similar steps are shown in the Synology <a href="https://kb.synology.com/en-uk/DSM/tutorial/How_to_log_in_to_DSM_with_key_pairs_as_admin_or_root_permission_via_SSH_on_computers">knowledge-base article for managing SSH keys</a>, where you use only File Station and none of the SSH work I show above. The problem is <span style="text-decoration: underline;">the documented steps only work for SFTP logins by an admin user!</span> Do what I show above for preparing .ssh and authorized_keys, and it will work for <strong>any</strong> user.</em></p>
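<p>Before moving on, you can sanity-check the permissions from the same SSH session. The <em>stat -c</em> flags below assume a GNU-style <em>stat</em>, which DSM provides.</p>

```shell
# Should print 700 for .ssh and 600 for authorized_keys
stat -c '%a %n' /volume1/homes/opnsense_backup/.ssh \
                /volume1/homes/opnsense_backup/.ssh/authorized_keys
```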



<h2 class="wp-block-heading">Test doing the backup</h2>



<p>At an SSH prompt on opnsense, save the following script into a backup_opnsense.sh file. The location of the script is not important yet &#8211; just put it in your user home.</p>



<pre class="wp-block-code"><code>#!/bin/sh
DATE=$(date +%Y-%m-%d-%H%M)
BACKUP_FILE="/root/config-backup-$DATE.xml"
cp /conf/config.xml $BACKUP_FILE
scp $BACKUP_FILE opnsense_backup@stovallhut.online:/opnsense_backup/
rm $BACKUP_FILE</code></pre>



<p>Remember to make the script executable:</p>



<pre class="wp-block-code"><code>chmod +x backup_opnsense.sh</code></pre>



<p>Execute the script as a test.</p>



<pre class="wp-block-code"><code>./backup_opnsense.sh</code></pre>



<p>On this first run you&#8217;ll probably see a prompt about the unknown SSH host key &#8211; type <em>yes</em> to accept it and allow the connection. This won&#8217;t show when you run the script again in the future.</p>



<p>You should <strong>NOT</strong> be prompted for a password. If you are, then something went wrong in the key-registration steps above. Review and debug before moving forward.</p>



<p>If all went well, you&#8217;ll find a new XML backup in the opnsense_backup share on your NAS!</p>



<h2 class="wp-block-heading">Add new cron job type recognized by the opnsense gui</h2>



<p>We have a script that can backup the configuration now, but it does not run automatically on a schedule. Let&#8217;s fix that. We&#8217;ll use the above script as the basis for a new type of cron command we can schedule in the opnsense gui.</p>



<p>In an SSH session as root on opnsense, execute the steps below. This relocates the script to the /usr/local/etc folder in the FreeBSD file system so it&#8217;s part of the opnsense environment. File permissions are set as required. Then we&#8217;ll create an actions_sftp_backup.conf file to register our script as a cron job.</p>



<pre class="wp-block-code"><code># Move script...
mv backup_opnsense.sh /usr/local/etc/backup_opnsense.sh
# Set file permissions
chmod 700 /usr/local/etc/backup_opnsense.sh
# Add as opnsense action
vi /usr/local/opnsense/service/conf/actions.d/actions_sftp_backup.conf</code></pre>



<p>Paste the following content into the actions_sftp_backup.conf at the <em>vi</em> prompt.</p>



<pre class="wp-block-code"><code>&#91;sftp_backup]
command:/usr/local/etc/backup_opnsense.sh
parameters:
type:script
message:Starting backup script
description:Backup config to NAS
</code></pre>



<p>Finally, restart the configd service so the new cron job will be recognized in the gui.</p>



<pre class="wp-block-code"><code># Restart configd service to expose new config
service configd restart
</code></pre>



<h2 class="wp-block-heading">Create cron job to periodically backup opnsense configuration</h2>



<p>At this point we&#8217;re nearly done. In the opnsense web console, go to <em>System -&gt; Settings -&gt; Cron</em> and press the plus (+) button to add a new job.</p>



<p>Fill in the time to run the job. In my example I&#8217;m executing the script at 11:28 every night.</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="570" height="410" src="https://walterstovall.online/wp-content/uploads/2023/11/image-2.png?x52476" alt="" class="wp-image-30034" srcset="https://walterstovall.online/wp-content/uploads/2023/11/image-2.png 570w, https://walterstovall.online/wp-content/uploads/2023/11/image-2-300x216.png 300w" sizes="auto, (max-width: 570px) 100vw, 570px" /></figure>



<p>On the above <em>Command</em> picklist, you should see a new <em>Backup config to NAS</em> entry you can pick. Choose that and it will execute the SFTP backup script we prepared above.</p>



<p>To test the cron job, note that the gui lets you clone your job &#8211; an easy way to create a copy that will run in about one minute so you can make sure this works. Let that happen and confirm you get a new backup in the opnsense_backup network share on the NAS. Then just delete the clone once you&#8217;ve seen it work.</p>



<h2 class="wp-block-heading">Trim the backups so only the last 30 days are saved</h2>



<p>With backups happening every day, the opnsense_backup share on the NAS will consume more and more space over time. The steps below fix that by running a script on the NAS each day that deletes backups older than 30 days.</p>



<p>On the NAS go to <em>Control Panel -&gt; Task Scheduler</em>. Create a new task based on a user-defined script. Select to execute the job as root. On the schedule tab, set the frequency you want to run the task such as daily. On the task tab, paste the following content:</p>



<pre class="wp-block-code"><code>#!/bin/sh
# Path to the backup directory
BACKUP_DIR="/volume1/opnsense_backup"

# Delete files older than 30 days
find "$BACKUP_DIR" -name "*.xml" -mtime +30 -exec rm {} \;
</code></pre>



<p>We&#8217;re done! Like anything, test this for proper behavior. You should see backups getting saved every day into the new opnsense_backup share on the NAS. Make a point to come back later and make sure old ones are getting deleted as expected.</p>
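<p>If you don&#8217;t want to wait for the retention task to prove itself, a dry-run variant of the same <em>find</em> just lists what would be deleted without deleting anything. The path assumes the share created above.</p>

```shell
# List (but don't delete) backups older than 30 days
find /volume1/opnsense_backup -name "*.xml" -mtime +30 -print
```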
<p>The post <a href="https://walterstovall.online/2023/11/13/automatically-backup-your-opnsense-router-to-a-synology-nas-via-sftp/">Automatically backup your opnsense router to a Synology NAS via SFTP</a> appeared first on <a href="https://walterstovall.online">Walter&#039;s Little World</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://walterstovall.online/2023/11/13/automatically-backup-your-opnsense-router-to-a-synology-nas-via-sftp/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>My early days grokking the wonder of microprocessors</title>
		<link>https://walterstovall.online/2023/10/12/my-early-days-grocking-the-wonder-of-microprocessors/</link>
					<comments>https://walterstovall.online/2023/10/12/my-early-days-grocking-the-wonder-of-microprocessors/#respond</comments>
		
		<dc:creator><![CDATA[Walter Stovall]]></dc:creator>
		<pubDate>Thu, 12 Oct 2023 14:28:28 +0000</pubDate>
				<category><![CDATA[personal]]></category>
		<category><![CDATA[tech]]></category>
		<guid isPermaLink="false">https://walterstovall.online/?p=29269</guid>

					<description><![CDATA[<p>I started programming computers in 1977 and was quickly addicted. It was simply amazing to me, the power that was available literally at my fingertips. I was limited only by my understanding. Making the computer do what I specifically wanted it to (at first even the most trivial things), took long days and nights as ... <a title="My early days grokking the wonder of microprocessors" class="read-more" href="https://walterstovall.online/2023/10/12/my-early-days-grocking-the-wonder-of-microprocessors/" aria-label="Read more about My early days grokking the wonder of microprocessors">Read more</a></p>
<p>The post <a href="https://walterstovall.online/2023/10/12/my-early-days-grocking-the-wonder-of-microprocessors/">My early days grokking the wonder of microprocessors</a> appeared first on <a href="https://walterstovall.online">Walter&#039;s Little World</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>I started programming computers in 1977 and was quickly addicted. It was simply amazing to me, the power that was available literally at my fingertips. I was limited only by my understanding. Making the computer do what I specifically wanted it to (at first even the most trivial things) took long days and nights as I struggled with all the new abstractions. I didn&#8217;t even know what was meant by a &#8220;variable&#8221;. I&#8217;m not joking either. I&#8217;m not talking about syntax, I&#8217;m talking about basic concepts. Defining one, giving it a value, inspecting its value, and especially using it to control the flow of a program were all brand new to me. I could stare at just one page of the little book that came with my <a href="https://en.wikipedia.org/wiki/TRS-80">TRS-80</a> for hours &#8211; and then do it again the next day. Once you&#8217;ve learned a programming language, they start falling like dominoes. The first one was really hard though. It took me months to move beyond <a href="https://en.wikipedia.org/wiki/BASIC">BASIC</a>. But trying to write video games on a computer with 4K of RAM was motivation to do so!</p>



<p>In the early &#8217;80s, after a few years of obsessing at home writing video games and the like, I managed to get a job (my 2nd) at <a href="https://en.wikipedia.org/wiki/Scientific_Atlanta">Scientific Atlanta</a> through a lab-tech friend I had been interacting with while learning more about digital electronics. At the time, Scientific Atlanta (and most of the world) was early in the process of using <a href="https://en.wikipedia.org/wiki/Microprocessor">microprocessor</a>s in the devices it manufactured. I was tasked with writing the <a href="https://en.wikipedia.org/wiki/Firmware">firmware</a> for a new antenna position tracking device. As the name implies, this was a seemingly simple device with a digital display that showed the angle of rotation of a microwave antenna. Position indicators were used primarily during construction of new microwave antennas: in an <a href="https://en.wikipedia.org/wiki/Antenna_measurement">antenna test range</a>, prototypes would receive signals while being positioned at virtually all possible <a href="https://en.wikipedia.org/wiki/Spherical_coordinate_system">azimuth and elevation</a> angles, while recording equipment stored <a href="https://en.wikipedia.org/wiki/Amplitude_modulation">amplitude modulation</a> data correlated with each position.</p>



<p>The work was led primarily by Bob Hyers, who was the <a href="https://www.indeed.com/career-advice/finding-a-job/principal-engineer-vs-senior-engineer">principal engineer</a>. Beyond Bob&#8217;s detailed work on the high-level design, the real work was done by two engineers, Steve and Charles. We quickly became good friends, and they taught me how to work with the prototypes they were building in the lab as we got things working a little more every day. Steve and Charles were responsible for defining exactly what components to put in the unit, and they laid out circuit boards and all interconnections. Then came me &#8211; the unit was not going to do anything at all in the way of indicating or transmitting antenna positions without somebody to write the necessary software. I learned from the ground up how to debug code in a lab environment with no user interface, just an oscilloscope and some pretty amazing debugging tools (<a href="https://en.wikipedia.org/wiki/Logic_analyzer">logic analyzer</a>) that were more powerful than anything I used before or since on traditional computers. My software handled every button press, changing display modes, actually calculating the position of the antenna as a decimal angle in degrees, and displaying the position or transmitting it to other test-range equipment in a timely manner.</p>



<p>That last point &#8211; having a <em>timely</em> position &#8211; was at the heart of most of the complexity in the design. Scientific Atlanta already had a position indicator at the time and used it heavily. The positions it indicated were perfectly accurate too. The problem was that the position was not calculated frequently enough. Since the antenna in the test range is moving, by the time you get position data it&#8217;s already out of date due to <a href="https://en.wikipedia.org/wiki/Group_delay_and_phase_delay">group delay</a>. So to make a good graph of position vs. amplitude modulation, the antenna had to be moved very, very slowly as thousands of data points were recorded. This took a long time to do. Then the antenna might undergo some changes to make improvements, and the whole process had to be repeated. The problem was further complicated when the signals being transmitted were distant from the antenna receiving the data. The new position indicator would radically improve on these problems.</p>



<p>None of the engineers knew a thing about writing computer code. So in that sense I had a blank slate. But the algorithm was still well conceived before I came on board. The goal was to generate accurate position data every 200 nanoseconds (ns). When I first heard this, it sounded like an impossible problem for a microprocessor of the day. The hardware design was based around an Intel CPU with a hardware clock ticking at 5 MHz, which works out to one tick every 200 ns. But it was in fact impossible for the processor to calculate <em>anything</em> in just one tick of the clock! Even a single <a href="https://en.wikipedia.org/wiki/Microcode">microcode</a> instruction can&#8217;t execute in a single tick. But learning more, I discovered that the design was not as crazy as it sounded. It was also an opportunity to stretch my programming abilities, given the nontrivial behavior it would take to pull it off.</p>



<p>The hardware of the <em>1885/86 Antenna Position Indicator</em>, as this model was known, included a &#8220;rate counter&#8221;. This was a device that could indicate a changing angle, and steadily increment that value by perhaps a few thousandths of a degree every 200 ns &#8211; all with literally no action taken by the microprocessor. As intelligent as the rate counter was (including wrapping a 359.9999 position to 0.0000 at the next increment), the rate counter did not <em>know</em> what angle to display and did not know what increment to apply to it at the next clock tick. Telling the rate counter what to do was the job of the firmware I was to write. But in order to save manufacturing costs, my ability to control the rate counter was very limited. I could in fact not even tell the counter what angle to display! All my software could do was change the increment that the rate counter would apply at each clock tick. Essentially, I could find out what position the counter was indicating &#8220;now&#8221; (more on that later). And I knew what increment it was currently applying at each tick (by remembering what I last told it to use). And finally, I knew the current position of the antenna. So I had these three inputs: <em>the angle being displayed, the actual angle of the antenna, and the increment being used currently</em>. It was up to my software to generate one little piece of information &#8211; what should be the new increment for the rate counter? That was my single means of controlling the position output. So it was up to me to see that the indicated position would reasonably quickly converge on an accurate indication of antenna angle.</p>



<p>While the hardware seemingly tied my hands behind my back, this was my favorite kind of thing to code. I want nothing more than a seemingly impossible problem that in fact has a solution. Don&#8217;t get me wrong though, the problem had been figured out already, including very detailed mathematics that my software should apply in the solution. But as always, there&#8217;s a million miles between a concept and a working device. A successful project takes competent engineers building the device and for a reasonable cost that turns a profit for the manufacturer.</p>



<p>The firmware I designed to do this was organized around a core software routine that was <a href="https://en.wikipedia.org/wiki/Interrupt">interrupt driven</a>. Every 16 milliseconds (ms) the hardware would raise the interrupt line on the CPU. Doing so would cause the CPU to execute an INT instruction. This meant the CPU would save its current instruction address (i.e. the address of the next instruction it plans to execute) on the <a href="https://en.wikipedia.org/wiki/Call_stack">system stack</a>. Then it would immediately jump to the address of the interrupt handler (pointed to by a table in the first page of RAM). There sat a tight little piece of code I wrote in <a href="https://en.wikipedia.org/wiki/Assembly_language">assembly language</a> (though most of the code was written in <a href="https://en.wikipedia.org/wiki/C_(programming_language)">C-language</a>). The function of the interrupt handler was to schedule the highest-priority task defined in my self-designed SAMOS (Scientific Atlanta <a href="https://en.wikipedia.org/wiki/Computer_multitasking">Multitasking Operating System</a>). There were various tasks for the software to regularly perform, including updating the display, responding to button presses by the operator, transmitting position output to the front panel or other devices on the serial I/O bus, and miscellaneous maintenance and diagnostics tasks.</p>



<p>As its last step, the interrupt handler transferred control to the SAMOS scheduler to execute the current highest-priority task, and that happened to pretty much always be updating the position. To calculate the angle, the code would read a couple of values from the <a href="https://en.wikipedia.org/wiki/Synchro">synchro</a> that was attached to the base of the antenna. These values were not in constant motion, like the antenna itself. Instead they were <a href="https://www.geeksforgeeks.org/latches-in-digital-logic/#">latched</a> at the time of the interrupt immediately beforehand. By applying some trigonometry to ratios from the synchro, it was possible to generate a digital angle in degrees. Like many things though, even that was a lot harder than it sounds given the <a href="https://en.wikipedia.org/wiki/Real-time_computing">real-time</a> requirements of the device. The trigonometry calculations could naturally be done with <a href="https://en.wikipedia.org/wiki/Floating-point_arithmetic">floating point math</a>. But to do that would mean using a floating point library, given the limits of the instruction set on the <a href="https://en.wikipedia.org/wiki/Intel_8088">8088</a> processor. That would have been way too slow. If, in the execution of this task, the software takes too long to calculate the angle and the new rate, and along comes another interrupt signal, then <strong>you&#8217;ve failed!</strong> That&#8217;s an overspeed condition. It&#8217;s absolutely essential to prevent that. Downrange measurements will be wrong to a degree that&#8217;s supposed to be impossible.</p>



<p>Various techniques were used to multiply or divide without using floating point arithmetic. In some cases the code just looked up a number in, say, a 1KB table stored in <a href="https://en.wikipedia.org/wiki/Read-only_memory">ROM</a> (truncating index bits as necessary, but also often using them to skew the looked-up table entry). Another simple example would be to use the processor&#8217;s <em>shift</em>-left instruction to shift a variable by a small number of bits (where each shift is a multiply-by-2), then multiply or divide integer variables (usually by a pre-calculated constant that&#8217;s also been shifted). That generates an integer result; then finally use <em>shift</em>-right (divide by 2) to get the result back in the intended scale. It takes a lot of care with the scale of the numbers to avoid <a href="https://www.welivesecurity.com/2022/02/21/integer-overflow-how-it-occur-can-be-prevented/">overflow</a> conditions, while still achieving accuracy requirements.</p>
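<p>To illustrate the shift-and-scale idea (this is just a sketch in shell integer arithmetic with made-up numbers, not the actual 8088 firmware): represent each value as an integer scaled by 2^8, multiply as integers, then shift right to drop the extra scale factor.</p>

```shell
# Hypothetical Q8 fixed point: values are integers scaled by 2^8 = 256.
# 1.5 degrees -> 384, constant 0.25 -> 64
SCALE_BITS=8
angle=384
k=64
# Integer multiply, then shift right to return to the Q8 scale
result=$(( (angle * k) >> SCALE_BITS ))
echo "$result"   # 96, i.e. 0.375 * 256 = 1.5 * 0.25 in Q8
```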



<p>As always, <em>latching</em> was key to synchronizing the calculations done in the firmware with the time of the moving antenna. Input latching means that the software is always calculating using numbers that are now obsolete given the moving antenna, but were known to be accurate at a specific moment in the recent <em>past</em>. Output latching means the software calculates updated rate values the equipment should now apply, but the change is only applied at a well defined moment in the near <em>future</em>. All this makes it possible to make precise measurements and control, in spite of the variable response times in the software.</p>



<p>As a point of real pride, I was named on United States Patent 4,853,839 for <em>Antenna Position Tracking Apparatus and Methods</em>. The basis for calling this a new invention is that, for the first time, it results in a position indicator with <span style="text-decoration: underline;">zero group delay</span> for an antenna moving at a constant velocity. It does that essentially by predicting where the antenna <span style="text-decoration: underline;">will be in the near future</span>, rather than always showing obsolete position data. I&#8217;m named as one of four inventors &#8211; Steven Nichols, Robert Hyers, Walter Stovall, and Charles Trawick. Good times I&#8217;ll always remember fondly.</p>


<div class="_3d-flip-book  fb3d-fullscreen-mode full-size" data-id="29275" data-mode="fullscreen" data-title="false" data-template="short-white-book-view" data-lightbox="dark-shadow" data-urlparam="fb3d-page" data-page-n="0" data-pdf="" data-tax="null" data-thumbnail="" data-cols="3" data-book-template="default" data-trigger=""></div><script type="text/javascript">window.FB3D_CLIENT_DATA = window.FB3D_CLIENT_DATA || [];FB3D_CLIENT_DATA.push('eyJwb3N0cyI6eyIyOTI3NSI6eyJJRCI6MjkyNzUsInRpdGxlIjoiVS5TLiBQYXRlbnQgLSBBbnRlbm5hIFBvc2l0aW9uIFRyYWNraW5nIEFwcGFyYXR1cyBhbmQgTWV0aG9kcyIsInR5cGUiOiJwZGYiLCJyZWFkeV9mdW5jdGlvbiI6IiIsImJvb2tfc3R5bGUiOiJmbGF0IiwiYm9va190ZW1wbGF0ZSI6Im5vbmUiLCJvdXRsaW5lIjpbXSwiZGF0YSI6eyJwb3N0X0lEIjoiMjkyNzYiLCJndWlkIjoiaHR0cHM6XC9cL3dhbHRlcnN0b3ZhbGwub25saW5lXC93cC1jb250ZW50XC91cGxvYWRzXC8yMDIzXC8xMFwvVS5TLi1QYXRlbnQtNDg1MzgzOS5wZGYiLCJwZGZfcGFnZXMiOiIzNyIsInBhZ2VzX2N1c3RvbWl6YXRpb24iOiJub25lIn0sInRodW1ibmFpbCI6eyJkYXRhIjp7InBvc3RfSUQiOiIwIn0sInR5cGUiOiJhdXRvIn0sInByb3BzIjp7ImJhY2tncm91bmRDb2xvciI6ImF1dG8iLCJiYWNrZ3JvdW5kSW1hZ2UiOiJhdXRvIiwiYmFja2dyb3VuZFN0eWxlIjoiYXV0byIsImhpZ2hsaWdodExpbmtzIjoiYXV0byIsImxpZ2h0aW5nIjoiYXV0byIsImNhY2hlZFBhZ2VzIjoiYXV0byIsInJlbmRlckluYWN0aXZlUGFnZXMiOiJhdXRvIiwicmVuZGVySW5hY3RpdmVQYWdlc09uTW9iaWxlIjoiYXV0byIsInJlbmRlcldoaWxlRmxpcHBpbmciOiJhdXRvIiwicHJlbG9hZFBhZ2VzIjoiYXV0byIsImF1dG9QbGF5RHVyYXRpb24iOiJhdXRvIiwicnRsIjoiYXV0byIsImludGVyYWN0aXZlQ29ybmVycyI6ImF1dG8iLCJtYXhEZXB0aCI6ImF1dG8iLCJzaGVldCI6eyJzdGFydFZlbG9jaXR5IjoiYXV0byIsIndhdmUiOiJhdXRvIiwic2hhcGUiOiJhdXRvIiwid2lkdGhUZXhlbHMiOiJhdXRvIiwiY29sb3IiOiJhdXRvIiwic2lkZSI6ImF1dG8iLCJjb3JuZXJEZXZpYXRpb24iOiJhdXRvIiwiZmxleGliaWxpdHkiOiJhdXRvIiwiZmxleGlibGVDb3JuZXIiOiJhdXRvIiwiYmVuZGluZyI6ImF1dG8iLCJoZWlnaHRUZXhlbHMiOiJhdXRvIn0sImNvdmVyIjp7IndhdmUiOiJhdXRvIiwiY29sb3IiOiJhdXRvIiwic2lkZSI6ImF1dG8iLCJiaW5kZXJUZXh0dXJlIjoiYXV0byIsImRlcHRoIjoiYXV0byIsInBhZGRpbmciOiJhdXRvIiwic3RhcnRWZWxvY2l0eSI6ImF1dG8iLCJmbGV4aWJpbGl0eSI6ImF1dG8iLCJmbGV4aWJsZUNvcm5lciI6ImF1dG8iLCJiZW5kaW5nIj
oiYXV0byIsIndpZHRoVGV4ZWxzIjoiYXV0byIsImhlaWdodFRleGVscyI6ImF1dG8iLCJtYXNzIjoiYXV0byIsInNoYXBlIjoiYXV0byJ9LCJwYWdlIjp7IndhdmUiOiJhdXRvIiwiY29sb3IiOiJhdXRvIiwic2lkZSI6ImF1dG8iLCJkZXB0aCI6ImF1dG8iLCJzdGFydFZlbG9jaXR5IjoiYXV0byIsImZsZXhpYmlsaXR5IjoiYXV0byIsImZsZXhpYmxlQ29ybmVyIjoiYXV0byIsImJlbmRpbmciOiJhdXRvIiwid2lkdGhUZXhlbHMiOiJhdXRvIiwiaGVpZ2h0VGV4ZWxzIjoiYXV0byIsIm1hc3MiOiJhdXRvIiwic2hhcGUiOiJhdXRvIn0sImhlaWdodCI6ImF1dG8iLCJ3aWR0aCI6ImF1dG8iLCJncmF2aXR5IjoiYXV0byIsInBhZ2VzRm9yUHJlZGljdGluZyI6ImF1dG8ifSwiY29udHJvbFByb3BzIjp7ImFjdGlvbnMiOnsiY21kVG9jIjp7ImVuYWJsZWQiOiJhdXRvIiwiZW5hYmxlZEluTmFycm93IjoiYXV0byIsImFjdGl2ZSI6ImF1dG8iLCJkZWZhdWx0VGFiIjoiYXV0byJ9LCJjbWRBdXRvUGxheSI6eyJlbmFibGVkIjoiYXV0byIsImVuYWJsZWRJbk5hcnJvdyI6ImF1dG8iLCJhY3RpdmUiOiJhdXRvIn0sImNtZFNhdmUiOnsiZW5hYmxlZCI6ImF1dG8iLCJlbmFibGVkSW5OYXJyb3ciOiJhdXRvIn0sImNtZFByaW50Ijp7ImVuYWJsZWQiOiJhdXRvIiwiZW5hYmxlZEluTmFycm93IjoiYXV0byJ9LCJjbWRTaW5nbGVQYWdlIjp7ImVuYWJsZWQiOiJhdXRvIiwiZW5hYmxlZEluTmFycm93IjoiYXV0byIsImFjdGl2ZSI6ImF1dG8iLCJhY3RpdmVGb3JNb2JpbGUiOiJhdXRvIn0sIndpZFRvb2xiYXIiOnsiZW5hYmxlZCI6ImF1dG8iLCJlbmFibGVkSW5OYXJyb3ciOiJhdXRvIn19fSwiYXV0b1RodW1ibmFpbCI6Imh0dHBzOlwvXC93YWx0ZXJzdG92YWxsLm9ubGluZVwvd3AtY29udGVudFwvdXBsb2Fkc1wvM2QtZmxpcC1ib29rXC9hdXRvLXRodW1ibmFpbHNcLzI5Mjc1LnBuZyIsInBvc3RfbmFtZSI6InUtcy1wYXRlbnQtYW50ZW5uYS1wb3NpdGlvbi10cmFja2luZy1hcHBhcmF0dXMtYW5kLW1ldGhvZHMiLCJwb3N0X3R5cGUiOiIzZC1mbGlwLWJvb2sifX0sInBhZ2VzIjpbXSwiZmlyc3RQYWdlcyI6W119');window.FB3D_CLIENT_LOCALE && FB3D_CLIENT_LOCALE.render && FB3D_CLIENT_LOCALE.render();</script>



<p><em>(use the toolbar for full-screen and to zoom/turn pages; how it all works is on pages 20 &amp; 21)</em></p>
<p>The post <a href="https://walterstovall.online/2023/10/12/my-early-days-grocking-the-wonder-of-microprocessors/">My early days grokking the wonder of microprocessors</a> appeared first on <a href="https://walterstovall.online">Walter&#039;s Little World</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://walterstovall.online/2023/10/12/my-early-days-grocking-the-wonder-of-microprocessors/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>SpaceX is churning out a Raptor per day now</title>
		<link>https://walterstovall.online/2022/11/03/spacex-is-churning-out-a-raptor-per-day-now/</link>
					<comments>https://walterstovall.online/2022/11/03/spacex-is-churning-out-a-raptor-per-day-now/#respond</comments>
		
		<dc:creator><![CDATA[Walter Stovall]]></dc:creator>
		<pubDate>Thu, 03 Nov 2022 10:34:43 +0000</pubDate>
				<category><![CDATA[science]]></category>
		<category><![CDATA[tech]]></category>
		<category><![CDATA[space]]></category>
		<guid isPermaLink="false">https://walterstovall.online/?p=20786</guid>

					<description><![CDATA[<p>There&#8217;s more to a rocket than an engine, but a rocket without an engine is not a rocket. SpaceX is passing an important milestone by producing a fully assembled Raptor engine per day. The Raptor is key to building the StarShip rocket that NASA has contracted with SpaceX to build as part of the Artemis ... <a title="SpaceX is churning out a Raptor per day now" class="read-more" href="https://walterstovall.online/2022/11/03/spacex-is-churning-out-a-raptor-per-day-now/" aria-label="Read more about SpaceX is churning out a Raptor per day now">Read more</a></p>
<p>The post <a href="https://walterstovall.online/2022/11/03/spacex-is-churning-out-a-raptor-per-day-now/">SpaceX is churning out a Raptor per day now</a> appeared first on <a href="https://walterstovall.online">Walter&#039;s Little World</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>There&#8217;s more to a rocket than an engine, but a rocket without an engine is not a rocket. SpaceX is passing an important milestone by producing a fully assembled <a href="https://en.wikipedia.org/wiki/SpaceX_Raptor" title="Raptor engine">Raptor engine</a> per day. The Raptor is key to building the <a href="https://en.wikipedia.org/wiki/SpaceX_Starship" title="StarShip rocket">StarShip rocket</a> that <a href="https://www.yahoo.com/news/spacex-nasa-artemis-lunar-lander-contract-report-184448656.html" title="NASA has contracted with SpaceX">NASA has contracted with SpaceX</a> to build as part of the <a href="https://www.nasa.gov/artemisprogram" title="Artemis scope and status - NASA">Artemis program</a> for future Moon and Mars missions.</p>



<p>It takes a lot of engines to power the StarShip &#8211; 33 engines in the first stage and then six more in the second stage. Producing an engine per day is a big confidence booster when it comes to the risks associated with using the StarShip as planned. For perspective on what that rate means, compare this to just a few years ago when production of a comparable engine was targeted at <span style="text-decoration: underline;">four per year</span>.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>In 2015,&nbsp;<a href="https://ir.aerojetrocketdyne.com/news-releases/news-release-details/nasa-and-aerojet-rocketdyne-sign-contract-restart-production-rs">NASA gave Aerojet Rocketdyne</a>&nbsp;a contract worth $1.16 billion to &#8220;restart the production line&#8221; for the RS-25 engine. Again, that was money just to reestablish manufacturing facilities, not actually build the engines. NASA is paying more than $100 million for each of those. With this startup funding, the goal was for Aerojet Rocketdyne to produce four of these engines&nbsp;<em>per year</em>.</p>
<cite><a href="https://arstechnica.com/science/2022/11/spacex-is-now-building-a-raptor-engine-a-day-nasa-says/" title="SpaceX is now building a Raptor engine a day, NASA says">arstechnica.com</a></cite></blockquote>



<h2 class="wp-block-heading">Moving ahead with StarShip testing and development</h2>



<p>Production capabilities aside, the StarShip is far from proven. There&#8217;s yet to be an orbital launch. Although there have been <a href="https://www.space.com/spacex-starship-six-engine-static-fire-ship-24" title="successful static fire tests of the second stage">successful static fire tests of the second stage</a>, there&#8217;s yet to be a full static fire of all 33 engines in the first stage. When that will happen is guesswork, but any day now. <a href="https://teslanorth.com/2022/09/19/spacex-super-heavy-booster-to-test-33-engine-static-fire-in-few-weeks-says-musk/" title="any day now?">A few weeks ago, it was going to happen in a few weeks</a>.</p>



<p>I can&#8217;t wait. Seeing the static fire and orbital launch of StarShip, a rocket big enough to pack cargo to Mars, that will be sweet!</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><mark style="background-color:var(--accent)" class="has-inline-color has-base-3-color">Late breaking news 11/3/2022&#8230;.</mark> SpaceX is pushing for an orbital test <span style="text-decoration: underline;">before the end of 2022</span> and NASA plans are in keeping with that!</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe loading="lazy" title="SpaceX&#039;s NEW Orbital flight timeline announced and NEW Starship prototype testing..." width="900" height="506" src="https://www.youtube.com/embed/V4FeSoSxLGk?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</div><figcaption class="wp-element-caption">Launches tentatively predicted for this year.</figcaption></figure>
<p>The post <a href="https://walterstovall.online/2022/11/03/spacex-is-churning-out-a-raptor-per-day-now/">SpaceX is churning out a Raptor per day now</a> appeared first on <a href="https://walterstovall.online">Walter&#039;s Little World</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://walterstovall.online/2022/11/03/spacex-is-churning-out-a-raptor-per-day-now/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>The Security and Redundancy of Clustered Virtual Machines</title>
		<link>https://walterstovall.online/2022/11/01/the-security-and-redundancy-of-clustered-virtual-machines/</link>
					<comments>https://walterstovall.online/2022/11/01/the-security-and-redundancy-of-clustered-virtual-machines/#respond</comments>
		
		<dc:creator><![CDATA[Walter Stovall]]></dc:creator>
		<pubDate>Tue, 01 Nov 2022 15:30:47 +0000</pubDate>
				<category><![CDATA[tech]]></category>
		<category><![CDATA[nas]]></category>
		<guid isPermaLink="false">https://walterstovall.online/?p=20693</guid>

					<description><![CDATA[<p>This post explores some techniques I&#8217;ve been using to improve the security of some services on my home network and make it easier to recover them in the event of hardware/other disasters. Below, I&#8217;ll describe how and why I&#8217;m moving more and more services onto virtual machines (VM). This is better for security because you ... <a title="The Security and Redundancy of Clustered Virtual Machines" class="read-more" href="https://walterstovall.online/2022/11/01/the-security-and-redundancy-of-clustered-virtual-machines/" aria-label="Read more about The Security and Redundancy of Clustered Virtual Machines">Read more</a></p>
<p>The post <a href="https://walterstovall.online/2022/11/01/the-security-and-redundancy-of-clustered-virtual-machines/">The Security and Redundancy of Clustered Virtual Machines</a> appeared first on <a href="https://walterstovall.online">Walter&#039;s Little World</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>This post explores some techniques I&#8217;ve been using to improve the security of some services on my home network and make it easier to recover them in the event of hardware or other disasters. Below, I&#8217;ll describe how and why I&#8217;m moving more and more services onto <a href="https://www.vmware.com/topics/glossary/content/virtual-machine.html" target="_blank" rel="noopener" title="">virtual machines</a> (VMs). This is better for security because an attacker who exploits vulnerabilities in a VM will most likely be able to compromise only the VM itself, not the whole host server. And at least with my <a href="https://www.synology.com/en-global" target="_blank" rel="noopener" title="">Synology</a> server, full &#8220;<a href="https://en.wikipedia.org/wiki/Bare-metal_restore" target="_blank" rel="noopener" title="">bare metal</a>&#8221; backups of the VMs are supported, including the ability to cluster servers so as to make <a href="https://en.wikipedia.org/wiki/Switchover" target="_blank" rel="noopener" title="">switchover</a> or <a href="https://en.wikipedia.org/wiki/Failover" target="_blank" rel="noopener" title="">failover</a> possible with just a few minutes of downtime. This makes virtual computers a lot more recoverable and relocatable than actual hardware.</p>



<p>Below I&#8217;ll detail some of how I&#8217;m managing this with a couple of VMs deployed on a cluster of two servers. The details of how I do this on a Synology NAS are pretty specific to that hardware &#8211; the concepts are not.</p>



<p>Highlights of this framework include:</p>



<ul class="wp-block-list">
<li>Packaging services in a VM contains the scope of the damage when the &#8220;server&#8221; is compromised.</li>



<li>Clustered hosts make it easy to move VMs to a new host or failover the VM if its host server is down.</li>



<li>Snapshots of VMs can be created instantly as scheduled and then replicated to other hosts in the cluster.</li>



<li>VMs can be exported to an external file system for off-site backup.</li>
</ul>



<h2 class="wp-block-heading">How I put the pieces together</h2>



<p>So much for the abstract. Below I&#8217;ll show you how I put this architecture together on my home network, clustering two servers that share two virtual machines.</p>



<p>The purpose of the virtual machines is not hugely relevant, but as you&#8217;ll see in the screenshots here, the two virtual computers I have are <em>hutbuddy_websites</em> and <em>Quicken_WindowsServer</em>. The first is a virtual computer that runs a copy of <a href="https://www.wundertech.net/how-to-setup-a-synology-dsm-virtual-machine-vdsm/" target="_blank" rel="noopener" title="">Virtual DSM</a> and hosts a few websites on my network. Websites can be notoriously vulnerable to attack. While I&#8217;m careful with security at those sites, it&#8217;s good to know that if that whole virtual server went down, it would still be only those websites and not my whole network. The second VM is something I use for running Quicken on a virtual Windows machine.</p>



<p>Now let&#8217;s start from VMs that exist but aren&#8217;t yet protected as I&#8217;ll outline. On a Synology server and many others, backing up virtual computers can get tricky, and some of it gets downright philosophical, with certain camps touting that you should <em>just back up the VM from within the VM itself.</em> Yeah, that&#8217;s possible, but recovering from a disaster then requires rebuilding that VM from scratch, starting by installing an operating system. It&#8217;s going to take hours for anything complex, and maybe days. I&#8217;m not settling for that because I don&#8217;t have to&#8230;</p>



<h2 class="wp-block-heading">Clustering virtual computers</h2>



<p>The redundancy starts by <a href="https://kb.synology.com/en-us/DSM/help/Virtualization/hosts?version=7" target="_blank" rel="noopener" title="">clustering hosts</a> that each share the same virtual machines. Only one host at a time is designated to be the one that runs a given VM. But with a simple action in the Protection Plan it is possible to move the VM to another host, either for better loading or because a host is down. <em>Note that on a Synology clustering requires a Virtual Machine Manager Pro license.</em></p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="595" src="https://walterstovall.online/wp-content/uploads/2022/11/VirtualMachineManagerCluster-1024x595.jpg?x52476" alt="" class="wp-image-20698" srcset="https://walterstovall.online/wp-content/uploads/2022/11/VirtualMachineManagerCluster-1024x595.jpg 1024w, https://walterstovall.online/wp-content/uploads/2022/11/VirtualMachineManagerCluster-300x174.jpg 300w, https://walterstovall.online/wp-content/uploads/2022/11/VirtualMachineManagerCluster-768x446.jpg 768w, https://walterstovall.online/wp-content/uploads/2022/11/VirtualMachineManagerCluster.jpg 1129w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /><figcaption class="wp-element-caption">Synology Virtual Machine Manager</figcaption></figure>



<p>The key to redundancy is in the Protection Plan you choose for the VM. By clicking on <em>Protection</em> you get to this console.</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="891" height="438" src="https://walterstovall.online/wp-content/uploads/2022/11/VMM_ProtectionPlan.jpg?x52476" alt="" class="wp-image-20700" srcset="https://walterstovall.online/wp-content/uploads/2022/11/VMM_ProtectionPlan.jpg 891w, https://walterstovall.online/wp-content/uploads/2022/11/VMM_ProtectionPlan-300x147.jpg 300w, https://walterstovall.online/wp-content/uploads/2022/11/VMM_ProtectionPlan-768x378.jpg 768w" sizes="auto, (max-width: 891px) 100vw, 891px" /><figcaption class="wp-element-caption">Protection Plan Console</figcaption></figure>



<p>In the <a href="https://kb.synology.com/en-us/DSM/help/Virtualization/data_protection?version=7" target="_blank" rel="noopener" title="">protection plan</a> you&#8217;ll schedule <em>snapshots</em>. A snapshot is a complete copy of the state of the virtual computer. Snapshots can be taken while the VM runs as <a href="https://kb.synology.com/en-us/DSM/tutorial/What_is_file_system_consistent_snapshot" target="_blank" rel="noopener" title="">filesystem-consistent snapshots</a> at a point in time. Then you define a Retention Policy that says exactly when you want to release the space for old snapshots.</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="673" height="426" src="https://walterstovall.online/wp-content/uploads/2022/11/VMMSnapshotRetentionPolicy.jpg?x52476" alt="" class="wp-image-20701" srcset="https://walterstovall.online/wp-content/uploads/2022/11/VMMSnapshotRetentionPolicy.jpg 673w, https://walterstovall.online/wp-content/uploads/2022/11/VMMSnapshotRetentionPolicy-300x190.jpg 300w" sizes="auto, (max-width: 673px) 100vw, 673px" /><figcaption class="wp-element-caption">VMM Retention Policy says when to delete old snapshots</figcaption></figure>



<p>In the example policy above, the system retains snapshots for the last week and then keeps one snapshot per week for the last month.</p>



<p>Now that sounds like a lot of disk space. My websites VM takes up about 250GB and I&#8217;m storing 15 or so copies of that?? Not really. <a href="https://walterstovall.online/2021/09/16/a-new-level-of-redundancy-btrfs-and-snapshot-replication-under-the-hood/" title="">Snapshots take advantage of the BTRFS file system</a> and only store deltas. What it does mean is that (unless you manually delete snapshots, which you can do) when you delete a bunch of stuff, it doesn&#8217;t go away immediately. That&#8217;s usually a good thing!</p>



<p>The outcome of clustering hosts like this is that if a host goes down, I can failover its VMs to the other host in just a few minutes. And if the VM crashes/other then I can restore from a snapshot made at various times that day, or less frequently for up to a month.</p>



<h2 class="wp-block-heading">What&#8217;s missing?</h2>



<p>OK so now we have two host servers that can each separately run the very same virtual machines. Not just sort of the same, but the same all the way down to the full content of the file system, the MAC address, everything. If a server goes down then I can almost instantly boot the VMs it hosted and they&#8217;re completely back in operation.</p>



<p>The only remaining problem is <em>what if I lose both servers?</em>  The two servers are in physical proximity. Theft, fire, or some other disaster might take down both servers, perhaps permanently. Obviously I won&#8217;t recover from that in just a few minutes, but the real problem is that the servers were replicating snapshots to each other, so now <strong>ALL the snapshots are gone!</strong></p>



<p>One solution to this problem would be to periodically export the VM to a file. This is NOT a &#8220;snapshot&#8221; with only deltas; it&#8217;s one great big file containing the whole state of the VM and everything in its internal file system.</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="640" height="295" src="https://walterstovall.online/wp-content/uploads/2022/11/VMMExport.jpg?x52476" alt="" class="wp-image-20703" srcset="https://walterstovall.online/wp-content/uploads/2022/11/VMMExport.jpg 640w, https://walterstovall.online/wp-content/uploads/2022/11/VMMExport-300x138.jpg 300w" sizes="auto, (max-width: 640px) 100vw, 640px" /><figcaption class="wp-element-caption">Export IF you shutdown the VM first</figcaption></figure>



<p>The problems here are two-fold. For one thing, the export of a big VM might take several hours, and the whole time it exports, <span style="text-decoration: underline;">you have to keep your VM shut down/offline</span>. The other problem is that this is a manual action! I&#8217;m loath to have manual procedures that I can automate. But can I?</p>



<p>At first it seems like we&#8217;re stuck here &#8211; and that&#8217;s indeed where I stayed for months. But ultimately I got some help from a friend at reddit and found <a href="https://www.synology-forum.de/threads/virtual-machine-manager-vms-sichern.91952/page-3#post-944113" target="_blank" rel="noopener" title="">this German website</a> that details a solution using an internal utility that ships with DSM (good thing Google translates).</p>



<p><em>(I&#8217;m fine with using this even though it&#8217;s not publicly documented &#8211; be your own judge)</em></p>



<p>See my SSH session below (run it with root privilege, i.e. <em>sudo -i</em>):</p>



<pre class="wp-block-preformatted">/volume1/@appstore/Virtualization/bin/vmm_backup_ova --help

Usage: /volume1/@appstore/Virtualization/bin/vmm_backup_ova [--dst] [--batch] [--host] [--guests] [--retent] [--retry]
        backup VM to shared folder on VMM

Options:
        --default       use default options to backup
        --dst           shared folder path for storing backup OVA
        --batch         the number of VMs exporting at a time (default: 5)
        --host|--guests mutually exclusive options
                        '--host' only backup VMs which repository is on the specified host (default: all)
                        '--guests' only backup specified VMs (default: not specified, use | for seperator if there are multiple targets)
        --retent        the number of backups for retention (default: 3)
        --retry         the number of times for backup retrying (default: 3)

Examples:
        Run backup script by default
                ./vmm_backup_ova --default
        Backup all guests which repository is on the host and store OVAs in certain shared folder
                ./vmm_backup_ova --dst=&lt;share-name&gt; --host="&lt;host-name&gt;"
        Backup all guests which repository is on the host and limit the number of VMs exporting at a time to avoid affecting performance
                ./vmm_backup_ova --batch=2 --host="&lt;host-name&gt;"
        Backup certain guests and store the last two OVAs per VM
                ./vmm_backup_ova --guests="&lt;guest_name_1&gt;|&lt;guest_name_2&gt;" --retent=2
root@HomeNAS2602:~#
</pre>



<p>The vmm_backup_ova utility is the cat&#8217;s meow here. I launch the program with a shell script that reads as follows:</p>



<pre class="wp-block-preformatted">#!/bin/bash
# clone/export VMs on this host for disaster recovery
# (the shebang must be the very first line of the script)
set -e
/volume1/@appstore/Virtualization/bin/vmm_backup_ova --dst=VMBackups --host="HomeNAS2602" --retent=1</pre>



<p>In this case I&#8217;m telling vmm_backup_ova to export every VM running on that host and store the export in a shared folder called <em>VMBackups</em> and retain only one backup. <em>A key advantage of this utility is that we do NOT have to shutdown the VM!</em> Instead, vmm_backup_ova starts by making a temporary clone of the running VM, which happens in nearly an instant. Then it proceeds to export that clone (which is never run) <span style="text-decoration: underline;">while the real VM continues to run</span>. The export of a large VM might take several hours, but it runs in the background while everything else continues to function and then the clone VM is automatically deleted.</p>



<p><em>Tip: Avoid spaces in your virtual computer names. My experience is the utility creates destination directories with the wrong names and then can&#8217;t populate them. See my use of underscores instead.</em></p>



<p>In practice I run a script like that on each of the two hosts. It&#8217;s nice that in the GUI of Virtual Machine Manager I can see and monitor the snapshot/export process even though I didn&#8217;t initiate it there. And although each NAS exports to its own file system, the VMBackups shared folder is replicated to the other host too via <a href="https://kb.synology.com/en-global/DSM/help/SynologyDrive/drive_sharesync?version=7" target="_blank" rel="noopener" title="">ShareSync</a>, and the <a href="https://www.synology.com/en-us/dsm/feature/hyper_backup" target="_blank" rel="noopener" title="">Hyper Backup</a> program is used to make off-site copies of VMBackups. Finally, the VMBackups share itself gets <a href="https://www.synology.com/en-us/dsm/feature/snapshot_replication" target="_blank" rel="noopener" title="">snapshots</a> retaining content for up to a month (I snapshot nearly everything to protect it from ransomware if nothing else).</p>



<p>I&#8217;m currently exporting once per month, as scheduled in the Task Scheduler. So if I lost BOTH hosts, I could still recover the VMs from the latest export (given some replacement hardware, of course) and then restore files from within each VM itself, since I&#8217;ll typically have file backups more recent than the last export. That way I don&#8217;t end up reverted all the way back to the export once I&#8217;m done.</p>
<p>The post <a href="https://walterstovall.online/2022/11/01/the-security-and-redundancy-of-clustered-virtual-machines/">The Security and Redundancy of Clustered Virtual Machines</a> appeared first on <a href="https://walterstovall.online">Walter&#039;s Little World</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://walterstovall.online/2022/11/01/the-security-and-redundancy-of-clustered-virtual-machines/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>How to limit the possible damage done by docker container malware</title>
		<link>https://walterstovall.online/2022/09/10/how-to-limit-the-possible-damage-done-by-docker-container-malware/</link>
					<comments>https://walterstovall.online/2022/09/10/how-to-limit-the-possible-damage-done-by-docker-container-malware/#respond</comments>
		
		<dc:creator><![CDATA[Walter Stovall]]></dc:creator>
		<pubDate>Sat, 10 Sep 2022 12:42:36 +0000</pubDate>
				<category><![CDATA[tech]]></category>
		<category><![CDATA[nas]]></category>
		<guid isPermaLink="false">https://walterstovall.online/?p=19352</guid>

					<description><![CDATA[<p>For all the docker users out there, I thought I&#8217;d share a couple points about managing docker containers on your home server. These are important security issues that get commonly missed. The simple examples you see on the internet for installing docker containers won&#8217;t usually mention these things. But they might save your whole system ... <a title="How to limit the possible damage done by docker container malware" class="read-more" href="https://walterstovall.online/2022/09/10/how-to-limit-the-possible-damage-done-by-docker-container-malware/" aria-label="Read more about How to limit the possible damage done by docker container malware">Read more</a></p>
<p>The post <a href="https://walterstovall.online/2022/09/10/how-to-limit-the-possible-damage-done-by-docker-container-malware/">How to limit the possible damage done by docker container malware</a> appeared first on <a href="https://walterstovall.online">Walter&#039;s Little World</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>For all the <a href="https://docs.docker.com/get-started/overview/">docker</a> users out there, I thought I&#8217;d share a couple of points about managing <a href="https://www.docker.com/resources/what-container/">docker containers</a> on your <a href="https://en.wikipedia.org/wiki/Home_server">home server</a>. These are important security issues that commonly get missed. The simple examples you see on the internet for installing docker containers won&#8217;t usually mention these things. But they might save your whole system from being shut down by malware/ransomware.</p>



<p>What are these protections and why are they necessary? First let me cover a little background on how docker containers work. The code in the container executes as part of the <a href="https://docs.docker.com/engine/">docker engine</a>. The docker engine, by necessity, executes with <a href="https://www.howtogeek.com/737563/what-is-root-on-linux/">root privilege</a> and can therefore read or write any data in the file system whatsoever. To cause damage, malware in the container need only successfully submit a request to delete critical system files, etc.</p>
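<p>You can verify this default for yourself. Absent other settings, the process inside a container runs as root. (This quick check is just a sketch &#8211; it assumes docker is installed and the stock <em>alpine</em> image is available or pullable; any small image would do.)</p>

```shell
# With no user mapping in place, the containerized process runs as root,
# and with docker that is effectively the same root the engine runs as.
docker run --rm alpine id
# typically reports: uid=0(root) gid=0(root) ...
```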



<p>In addition to damaging the file system, containers can also carry out network attacks on other containers on your server. Containers normally run in the default <a href="https://docs.docker.com/network/bridge/">bridge network</a>. Being on the same <a href="https://www.cloudflare.com/learning/network-layer/what-is-a-subnet/">subnet</a>, every container can send traffic straight to every other container&#8217;s IP address (and on user-defined networks, docker&#8217;s embedded DNS even lets containers discover each other by name). The requests they send to each other may be malicious and won&#8217;t be blocked by any firewall, since they occur within the same docker subnet (which is not a real network &#8211; it&#8217;s a virtual LAN in the engine).</p>



<p>I recently went digging in this area when I got interested in installing the <a href="https://js.wiki/">wiki.js</a> container on my system to hold a <a href="https://en.wikipedia.org/wiki/Wiki">wiki</a> site. Wiki.js is a fully fledged web site/web publishing framework. Its JavaScript architecture and interfaces make it particularly susceptible to injection attacks. There&#8217;s also a history of quite a few bugs, and I&#8217;m not sure the codebase is clean of malware or poor security practices. That might be a reason to have second thoughts about using it at all, but IMO that&#8217;s a little drastic if things are managed well.</p>



<p><span style="text-decoration: underline;">But these concerns did spur me to learn about some controls that can be put in place, and how to use them</span>. What I&#8217;m looking for here is to see that the <a href="https://www.ibm.com/topics/attack-surface">attack surface</a> within the docker engine is limited to the wiki.js website itself &#8211; not my whole server. This means that an attacker might bring down wiki.js and might gain access to any information that&#8217;s been published there. But potentially numerous other services, like my password manager, websites/SQL databases, financial software, and online movies, remain unaffected.</p>



<h2 class="wp-block-heading">Isolate Docker Containers</h2>



<p>Docker provides a couple of ways to manage container security. You just have to make a point of using them when you have reason to be concerned about what a container might do (like, uh&#8230;all the time &#8211; I should have been doing this all along).</p>



<ul class="wp-block-list"><li><a href="https://docs.docker.com/engine/reference/commandline/network_create/">Give the container its own network</a>. Most people install containers on the default <em>bridge</em> network simply by not specifying otherwise, so the examples you find on the web usually keep things simple and leave this out. Alternatively, you can isolate your container on its own network, which means it gets its own subnet. Now, even if your container magically knew the IP address of another container, it would not be able to send it anything; the docker runtime would not route the request. This is why docker has this facility, and why you should use it.</li><li><a href="https://github.com/linuxserver/docker-documentation/blob/master/general/understanding-puid-and-pgid.md">Limit the logical file-system privilege of the container</a>. As mentioned, the docker runtime runs with root privilege. That would seem to drive a nail into the coffin of any goal of seeing your container have limited privileges as it executes file system code. But there&#8217;s a facility to address just this concern: the <em>PUID/PGID</em> arguments (an environment-variable convention honored by many images, notably the linuxserver.io builds) that make the container execute requests <em>as though</em> they were issued by a specific user. So barring some kind of zero-day vulnerability in the runtime, this goes a long way toward limiting the damage done by ill-formed or ill-intentioned code. Again, you don&#8217;t usually see these arguments getting used. They won&#8217;t protect you unless you use them.</li></ul>
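<p>Putting both controls together, a command-line sketch might look like the following. (Everything here is illustrative: the image path assumes one of the linuxserver.io builds that honor the PUID/PGID convention, the volume path is made up, and the ID values should come from running <em>id</em> against your own limited user.)</p>

```shell
# 1) A dedicated bridge network: containers on it get their own subnet,
#    and the docker engine will not route traffic to other bridges.
docker network create --driver bridge wiki

# 2) Run the container on that network, with its file requests executed
#    as a limited user via PUID/PGID (hypothetical values shown).
docker run -d --name wikijs \
  --network wiki \
  -e PUID=1031 -e PGID=100 \
  -v /volume1/docker/wikijs:/config \
  -p 3000:3000 \
  lscr.io/linuxserver/wikijs
```

<p>Only the explicitly published port (3000 here) is reachable from outside; another compromised container on the default bridge has no route to the <em>wiki</em> network at all.</p>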



<h2 class="wp-block-heading">How I went about container isolation by example</h2>



<p>The details of applying the above docker facilities are system specific, but similar steps will apply regardless. In a broad sense, the task is to create a dedicated bridge network for the container and use it, and then also to limit the container&#8217;s file system privileges.</p>



<p>These are the specific steps I took to deploy the wiki.js container on my <a href="https://www.synology.com/en-global/company/news/article/DS1520Plus_PR">DS-1520+ NAS</a>. There are lots of ways of doing the equivalent things, this is just a by-example for the steps I took based on what&#8217;s easy and familiar to me.</p>



<p>The first thing I&#8217;m going to do is create a network that I&#8217;ll call &#8220;wiki&#8221;, where I&#8217;ll isolate the wiki.js container. I do that by running Portainer: select my host, go to Networks, and click on Add. Fill in the name of the network as &#8220;wiki&#8221;, confirm the Driver is &#8220;bridge&#8221;, accept defaults on everything else, and save this as a new network.</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="625" height="401" src="https://walterstovall.online/wp-content/uploads/2022/09/image.png?x52476" alt="" class="wp-image-19354" srcset="https://walterstovall.online/wp-content/uploads/2022/09/image.png 625w, https://walterstovall.online/wp-content/uploads/2022/09/image-300x192.png 300w" sizes="auto, (max-width: 625px) 100vw, 625px" /><figcaption>portainer screenshot add network</figcaption></figure>



<p>Now you can see your new network that&#8217;s set up on its own subnet.</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="718" height="372" src="https://walterstovall.online/wp-content/uploads/2022/09/image-1.png?x52476" alt="" class="wp-image-19355" srcset="https://walterstovall.online/wp-content/uploads/2022/09/image-1.png 718w, https://walterstovall.online/wp-content/uploads/2022/09/image-1-300x155.png 300w" sizes="auto, (max-width: 718px) 100vw, 718px" /><figcaption>portainer screenshot network list</figcaption></figure>



<p>With the network ready, I&#8217;ll now set up a user account for limiting the container&#8217;s privileges.</p>



<p>Start by creating a system user. On my system I went to the Control Panel and set up a new user called &#8220;docker_wikijs&#8221;. The only directory this user has any access to whatsoever is the shared folder where wiki.js maintains all its settings and data.</p>



<p>Getting the PUID/PGID means running the linux <em>id</em> command. If you&#8217;re comfortable with SSH and have it enabled on your server, you can open an SSH prompt and get the output as shown in this example, where I execute &#8220;id docker_wikijs&#8221;.</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="634" height="248" src="https://walterstovall.online/wp-content/uploads/2022/09/image-2.png?x52476" alt="" class="wp-image-19358" srcset="https://walterstovall.online/wp-content/uploads/2022/09/image-2.png 634w, https://walterstovall.online/wp-content/uploads/2022/09/image-2-300x117.png 300w" sizes="auto, (max-width: 634px) 100vw, 634px" /><figcaption>SSH Terminal get PUID/PGID values</figcaption></figure>



<p>So what if you&#8217;re NOT so comfortable with SSH and don&#8217;t have it set up? On a Synology, don&#8217;t despair. You can execute the <em>id</em> command by setting up a task in the Control Panel, and the output will come to you as email. <a href="https://mariushosting.com/synology-find-uid-userid-and-gid-groupid-in-5-seconds/">See this easy guide on doing that</a>. (By the way, you can use this same trick to execute any task, such as <a href="https://docs.docker.com/engine/reference/commandline/run/">docker run</a>, as root; just know that you need to take proper care doing so.)</p>



<p>Take the <em>uid</em> and <em>gid</em> values that the <em>id</em> command returns; they are all you need for the PUID and PGID arguments to the docker run command.</p>
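<p>In script form, the same lookup can be captured straight into the arguments. A small sketch: on the NAS you would name the service account (the hypothetical &#8220;docker_wikijs&#8221; above), while this demo uses the current user so it runs anywhere.</p>

```shell
# Look up the numeric uid/gid of the account the container should act as.
# Replace "$(id -un)" with your service account name, e.g. docker_wikijs.
ACCOUNT=$(id -un)
PUID=$(id -u "$ACCOUNT")
PGID=$(id -g "$ACCOUNT")

# These become the -e arguments on the docker run line.
echo "-e PUID=${PUID} -e PGID=${PGID}"
```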



<p>Having prepared the shared folders that wiki.js wants, I&#8217;m now ready to execute docker run to install the container. The following docker run command highlights the arguments that isolate the container.</p>



<p><kbd>docker run -d --name=wikijs \<br><mark style="background-color:#802f2f" class="has-inline-color has-base-3-color">--network=wiki \<br>-e PUID=</mark><mark style="background-color:#ff4545" class="has-inline-color has-base-3-color">&lt;uid value&gt;</mark><mark style="background-color:#802f2f" class="has-inline-color has-base-3-color"> \<br>-e PGID=</mark><mark style="background-color:#fa3d3d" class="has-inline-color has-base-3-color">&lt;gid value&gt;</mark><mark style="background-color:#802f2f" class="has-inline-color has-base-3-color"> \</mark><br>-p 3540:3000 \<br>-e TZ=America/New_York \<br>-v /volume1/docker/wikijs/config:/config \<br>-v /volume1/docker/wikijs/data:/data \<br>--restart always \<br>ghcr.io/linuxserver/wikijs</kbd></p>



<p>The <em>network</em> argument puts the container on that bridge instead of the default. The PUID and PGID arguments look just like simple environment variables, but the container&#8217;s startup scripts (linuxserver.io images support this convention) pick them up and quietly run the application with those privileges.</p>
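<p>Both effects can be confirmed after the fact. A sketch, assuming the container name &#8220;wikijs&#8221; from above and a linuxserver.io image (which maps PUID/PGID onto its internal &#8220;abc&#8221; user):</p>

```shell
# List the networks the container is attached to; it should show only "wiki".
docker inspect wikijs \
  --format '{{range $name, $net := .NetworkSettings.Networks}}{{$name}} {{end}}'

# linuxserver.io images apply PUID/PGID to their internal "abc" user;
# this prints the uid/gid the application actually runs as.
docker exec wikijs id abc
```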



<p><mark style="background-color:var(--base)" class="has-inline-color">Like anything, though, test it out. For example, reduce the user to read-only privilege and observe the wiki website failing to save files when you tell it to.</mark></p>
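<p>One way to run that test, sketched against the volume paths from the docker run command above (it assumes you have shell access on the host and can adjust permissions there):</p>

```shell
# Temporarily remove write permission from the data folder on the host...
chmod -R a-w /volume1/docker/wikijs/data

# ...then try saving a page in the wiki; it should fail. Check the logs:
docker logs --tail 20 wikijs

# Restore write permission for the owner when the test is done.
chmod -R u+w /volume1/docker/wikijs/data
```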



<p>I execute the above <em>docker run</em> and then go to portainer and find wikijs installed as requested. <img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>
<p>The post <a href="https://walterstovall.online/2022/09/10/how-to-limit-the-possible-damage-done-by-docker-container-malware/">How to limit the possible damage done by docker container malware</a> appeared first on <a href="https://walterstovall.online">Walter&#039;s Little World</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://walterstovall.online/2022/09/10/how-to-limit-the-possible-damage-done-by-docker-container-malware/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
