The Big Red Car Returns

The last five weeks have been a difficult time in the life of your Big Red Car. We had the Frozen Week, wherein the temps went to 8F at night, compounded by a loss of electricity and water. It was hard. I toughed it out and found out I am not really as tough as I once was.

Texas is not built for this kind of northern nonsense. It also snowed and we do not have snow plows.

On the heels of that I contracted some viral malady — the docs ruled out COVID (had it in July 2020), Flu A/B, West Nile fever, Ebola, and other maladies. Symptoms were consistent with COVID, but I had already had that and had the antibodies to prove it. Never figured it out.



Big Red Car — Fast Site

About a month and a half ago, I got hacked. That made me focus on the Big Red Car website. It was very frustrating. Lots of time on the chat with support from a bunch of different places — talking to you, AWS.

I changed a lot of things — hosting, security, analytics, CDN, cache, backup, image optimization, fonts, SSL, subscription, SEO — had to. I had let the site get a little overgrown and spindly.

So, I have been doing some work on it. Have three or four more things left to do, but today, I got the speed where it needs to be.

The site is loading in less than a second. For a site this large, that is a superb speed.

I use GTMetrix to measure performance and keep my results. It has taken at least a month, but seeing an A (96%) on PageSpeed and an A (90%) on YSlow with a 0.8-second load time is pretty damn good.
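If you want a quick sanity check between full GTMetrix runs, a few lines of Python will time a raw page fetch. This is a rough sketch, not a substitute for GTMetrix — it measures transfer time only, not the full browser render, and the `data:` URL here is just a self-contained stand-in for your site's address:

```python
import time
import urllib.request

def load_time_seconds(url: str) -> float:
    """Fetch a URL and return the elapsed wall-clock time in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as response:
        response.read()  # pull the full body, the way a browser would
    return time.perf_counter() - start

# Stand-in URL so the sketch runs anywhere; swap in your own site.
elapsed = load_time_seconds("data:text/plain,hello")
print(f"{elapsed:.3f}s", "PASS" if elapsed < 1.0 else "FAIL")  # sub-second budget
```

Run it a few times and keep the numbers — a trend line is more honest than any single reading.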

The smartest thing I did was to turn loose WPSpeedGuru in the person of Alexei Kutsko.

I cannot believe how much better the site does in search rankings. I never really configured it for that, but the speed makes all the difference.

I hope you enjoy it.

If there is some change you’d like me to consider, drop me a comment. It feels like it does when you finish tying a big, fat, beautiful Monkey’s Paw.

BRC — The Website Troubles

About a month ago, the BRC website was hacked. I first noticed it because my Amazon Web Services “instance” (the hosting arrangement) kept cutting out for excess CPU usage. At the time, I knew nothing about it and hadn’t put any analytics on the site.

To fix this, I had to dig into AWS, shut down the instance, and restart it. When the hack was in play, load times on my site were measured in weeks. Still, I had great traffic for most of the day.

The AWS solution was to buy more CPU capacity, but something didn’t look right. I wiggled into the analytics and found out it was happening between 1 and 4 AM, when the CPU usage was rocketing right up. Still, AWS said, “Uhh, buy more CPU and it will handle the problem.”
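The kind of check that surfaces an off-hours pattern like that can be sketched in a few lines, assuming you have web-server access logs with timestamps. The log lines and field layout below are illustrative, not from the actual BRC server:

```python
from collections import Counter
from datetime import datetime

# Illustrative access-log lines; real ones would come off the server.
log_lines = [
    "2021-02-10 01:13:44 GET /wp-login.php 200",
    "2021-02-10 02:05:02 GET /xmlrpc.php 200",
    "2021-02-10 03:41:19 GET /xmlrpc.php 200",
    "2021-02-10 09:12:55 GET /index.html 200",
    "2021-02-10 14:30:01 GET /about 200",
]

def requests_per_hour(lines):
    """Count requests by hour of day to spot off-hours spikes."""
    hours = Counter()
    for line in lines:
        stamp = " ".join(line.split(" ", 2)[:2])  # "YYYY-MM-DD HH:MM:SS"
        hours[datetime.strptime(stamp, "%Y-%m-%d %H:%M:%S").hour] += 1
    return hours

counts = requests_per_hour(log_lines)
# Flag the 1-4 AM window, when legitimate traffic should be near zero.
suspicious = {h: n for h, n in counts.items() if 1 <= h <= 4}
print(suspicious)  # → {1: 1, 2: 1, 3: 1}
```

Bucket by hour, compare against what daytime traffic looks like, and the bad actors’ work shift shows up on its own.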

I hired a guru through UpWork to look at it and he said, “Every night between 1-4 AM, bad actors are snatching your website and using it to host nefarious things.”

We made some changes and it got better piece by piece. Still, AWS said, “Buy some more CPU, you rusty bucket of bolts.”

It took about three weeks and the consultant re-worked a bunch of things — server-side things. Then, he noted that my site was SLOW.

I used GTMetrix to monitor my site and agreed. If you know GTMetrix, you will gag when I tell you that my PageSpeed Score was F- and my YSlow Score was also F-. My site was taking about 16 seconds to load.
