A great technical/programmer interview question? (Also: how to update all your Route53 registry contacts)

This one will separate the nerds from the normals:

Assuming there is no one you could delegate the data entry task to, how do you feel about the fact you just spent 60 minutes learning and writing a script to use some APIs to save you from 30 minutes of repetitive manual data entry on a remote system?

A) Darn, I guess I should have just entered the data

B) I’d have cheerfully spent 2 hours coding to avoid a half hour of stupid monkey work

The source of this was discovering that using the AWS Route53 dashboard to update the street address of the registry contacts for 100 domains meant manually editing 300 records…

No sir, no “populate from a central master record” and no sir, no “check all the domains you want to edit with this new data”.

Knowing full well that learning and using the API would take longer than the actual, probably-never-needed-again task, I nonetheless created this gist:

a quick ruby script to update route53 contacts for all registered domains, since R53 dashboard STILL offers no way to do that
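The gist boils down to a loop over the registered domains plus a contact update call. Here is a sketch (not the gist verbatim) using the aws-sdk-route53domains gem; the address values are placeholders, and it assumes AWS credentials are already configured in your environment:

```ruby
# Sketch: update the street address on all three registry contacts for
# every domain registered in Route53. Requires gem "aws-sdk-route53domains".

NEW_ADDRESS = {
  address_line_1: "123 New Street", # placeholder values -- use your own
  city: "Honolulu",
  state: "HI",
  zip_code: "96712",
  country_code: "US"
}.freeze

# Pure helper: merge the new address into an existing contact hash
def updated_contact(contact)
  contact.merge(NEW_ADDRESS)
end

if defined?(Aws::Route53Domains) # guard so the helper is usable standalone
  # the Route 53 Domains API is served out of us-east-1
  client = Aws::Route53Domains::Client.new(region: "us-east-1")
  client.list_domains.domains.each do |d| # paginate if you have many domains
    detail = client.get_domain_detail(domain_name: d.domain_name)
    client.update_domain_contact(
      domain_name: d.domain_name,
      admin_contact: updated_contact(detail.admin_contact.to_h),
      registrant_contact: updated_contact(detail.registrant_contact.to_h),
      tech_contact: updated_contact(detail.tech_contact.to_h)
    )
    puts "updated #{d.domain_name}"
  end
end
```

Note that each update_domain_contact call triggers Route53's normal contact-change workflow, so expect confirmation emails.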

Cloudflare free DNS is VERY slow to update nameservers

Hours after switching the nameservers for a website from Namecheap to Amazon Route53, the new nameservers were still not being used on my Mac, even after flushing the DNS cache on my Mac with the terminal command below and then rebooting:

sudo dscacheutil -flushcache; sudo killall -HUP mDNSResponder

I then accessed the site from another Mac on my wifi network and it worked fine.

I then tried accessing the site from my primary Mac using a VPN, and again it worked fine.

The difference, it turns out, was that I was using Cloudflare’s DNS on my primary Mac. As soon as I switched my Mac’s DNS away from Cloudflare, the new site was accessible.


Side note: One BIG disadvantage of buying domains on Namecheap is there is no way to set the nameservers during the purchase… the domain will start off using Namecheap’s DNS, and the TTL is pre-set to 30 minutes. So if you plan to host your DNS elsewhere, when you specify custom nameservers there will be a delay of at least 30 minutes before your new nameservers take effect. (And as noted above, if you are using Cloudflare for your local machine’s network DNS, it can be hours.)


A 12-year-old Firefox bug that will convince you to switch to Chrome

Reported 12 years ago, closed, then finally re-opened (but not fixed) during the past year

Firefox periodically mangles hidden form data… you know, that stuff that makes your apps work.

On the right, Chrome showing the “_method” field’s correct value, “patch”.

On the left, Firefox’s mangled version of the exact same web page: “5f4ledBRFGRYSUpaeJ29y-J0SX6KRSzbr1zSjVvgy1fhGmQAXXzsjLxdswyBtqopEnO6pQAaJTEFUJKXDVyisg”


Getting Rails 6 ActionMailer to send emails via an explicit ActiveJob (so you can specify Job behavior)

By default, if your Rails 6 app sends emails via ActionMailer it is “nicely integrated with Active Job” (in the words of the Rails guide), so whatever queuing backend you have set up on Rails will get used. It “just works” as long as you’re content with the default queue.

And that default mailing behavior is probably a terrible idea. Because your default queue backend (like Sidekiq) probably has automatic retries. And automatic retries are evil when it comes to email… if an email is sent, yet the confirmation handshake times out or has another error, then the email gets re-sent. (I’ve posted elsewhere about my inbox flooded with such behavior from big guys (Microsoft) and small guys (JibJab) alike.)

The right way to do email in Rails is:

  1. Configure ActionMailer to send your email jobs via an explicit ActiveJob that you can custom-configure, not the implicit (magic) default connection to ActiveJob.
  2. For your Mailers, use a persistent queuing backend that doesn’t forget all your pending jobs in case of a server restart, lets you disable retries, and lets you inspect failed jobs, debug the issue, and retry them. (No entry-level Sidekiq plans, in other words, where failed jobs cannot be inspected or retried once you correct the issue.)
  3. Configure your mailer Job.

For emails, I like DelayedJob, even when using Sidekiq for all the magic Hotwire and Stimulus goodies. It keeps pending email jobs right in a database table.

Step 1. The hardest part of the whole thing was figuring out the (completely undocumented in the Rails guides) method to configure an ActionMailer to use an explicit ActiveJob: “delivery_job = JobName”. It’s very simple:

# sample test mailer; lets you purposefully raise an exception for testing how the queue handles it
class TestMailer < ApplicationMailer

  self.delivery_job = TestMailerJob # this is the secret step 1

  def test_poobah(arg)
    raise "error_test_1 in TestMailer" if arg == "error1"

    mail(
      to: "sometestemail@yourdomain.com",
      from: "your_sending_address@yourdomain.com",
      subject: "a test email (arg #{arg}) at #{Time.now}"
    )
  end
end


Step 2. I install gem 'delayed_job_active_record', then add an initializer that establishes a queue for mailing that does not retry. Apart from the retry settings, it’s pretty much as documented on the gem’s homepage.


# config/initializers/delayed_job_config.rb

Delayed::Worker.destroy_failed_jobs = false # keep failed jobs in the table so you can inspect and retry them
Delayed::Worker.max_attempts = 1 # a single attempt: no automatic retries
Delayed::Worker.read_ahead = 1
Delayed::Worker.default_queue_name = 'default_dj'
Delayed::Worker.sleep_delay = 10 # seconds to sleep between polls for new jobs

Step 3. Define the job itself:

# sample test job that lets you see how job-level errors are surfaced
class TestMailerJob < ApplicationJob
  self.queue_adapter = :delayed_job # specify which queue backend to use
  # sidekiq_options retry: false # IF using Sidekiq, disable retries HERE
  queue_as :default_dj # specify the queue name (if you have more than one)

  def perform(arg)
    raise "error_test_2 in TestMailerJob" if arg == "error2"
  end
end

To bypass the queue, send via:

TestMailerJob.perform_now("a test argument")

To send via the queue:

TestMailerJob.perform_later("a test argument")
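For a bird’s-eye view of what that one “secret” line buys you, here is a toy, plain-Ruby sketch of the routing idea (this is NOT Rails internals, just an illustration): a class-level delivery_job attribute determines which job class the mailer hands its deliver_later work to.

```ruby
# Toy illustration only -- NOT Rails internals. It just shows the routing idea:
# a mailer-level class attribute decides which job class enqueues the delivery.

class DefaultDeliveryJob
  def self.perform_later(*args)
    "DefaultDeliveryJob enqueued #{args.inspect}" # pretend to enqueue
  end
end

class CustomMailerJob
  def self.perform_later(*args)
    "CustomMailerJob enqueued #{args.inspect}" # pretend to enqueue
  end
end

class ToyMailer
  class << self
    attr_writer :delivery_job

    def delivery_job
      @delivery_job || DefaultDeliveryJob # fall back to the framework default
    end

    def deliver_later(message)
      delivery_job.perform_later(message) # route through the configured job class
    end
  end
end

ToyMailer.deliver_later("hello")         # routed via DefaultDeliveryJob
ToyMailer.delivery_job = CustomMailerJob # the "self.delivery_job =" step
ToyMailer.deliver_later("hello")         # now routed via CustomMailerJob
```

In real Rails, the default is the framework’s mail delivery job, and assigning `self.delivery_job` in your mailer swaps in your own.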

Stopping Logentries High Response Time alerts that are due to normal ActionCable behavior

Upon pushing a new Rails 6.1 app to Heroku, we promptly started getting “High Response Time” alerts from Logentries:

... heroku router - - .... method=GET path="/cable" .... request_id=xxxxx ... connect=4ms service=61011ms ...

We noticed they were all for the URL /cable, which makes sense: ActionCable keeps the connection open, so the reported service time keeps increasing.

I saw some online posts suggesting disabling the High Response Time alert, but a better way IMO is to keep the alert and omit that particular route.

In your Logentries dashboard, simply edit Tags&Alert > High Response Time > Pattern

from: (service > 5000)

to: (service > 5000) AND NOT (path cable)

The quickest & easiest way to develop AWS Lambda functions that use Ruby gems

Although there are some nifty frameworks for building and managing AWS Lambda functions using Ruby, unless the project is quite complex I like a simple local-to-AWS development flow, without any framework magic obscuring what’s actually happening in my code.

This is probably the simplest way to get up and running developing locally for AWS Lambda functions that need to use common gems such as “rest-client”. It is modeled after https://www.stevenringo.com/ruby-in-aws-lambda-with-postgresql-nokogiri/ but simplified.

You need an AWS account, and Docker, period. (Don’t even need the AWS CLI command line utility.)

In browser:

1. Create a default ‘hello world’ lambda function in your aws lambda console

It sets up intelligent defaults for roles etc.

I like to set concurrency to 1 while developing.

In the Mac terminal:

2. Create a Docker folder with a Dockerfile that uses an AWS Lambda image

The folder is just to hold the Dockerfile you use when you need to build or rebuild the Docker image you use for local AWS Lambda development.
$ mkdir aws-lambda-docker
$ cd aws-lambda-docker

If using postgres (like the article this was based on), it’ll need extra stuff; the example below is just the simplest Dockerfile that lets you use Ruby gems.

# Dockerfile
# be sure the ruby version is the same as what aws lambda currently offers
FROM lambci/lambda:build-ruby2.7
RUN gem update bundler
CMD "/bin/bash"

3. Create a tagged Docker image, your own local “AWS Lambda in a Box” that you can re-use for all your Ruby-with-gems Lambda projects:

I’m building an image for AWS Lambda Ruby 2.7, so I use that for the tag:

$ docker build -t awsruby27 .

4. Make a Project directory

$ mkdir awsstuff 
$ cd awsstuff

5. Create your lambda_function.rb code

We’ll use two gems, one that needs to be compiled with native extensions (nokogiri) and one that does not (rest-client).

# lambda_function.rb
require 'json'
require 'rest-client'
require 'nokogiri'

def lambda_handler(event:, context:)
  url = "https://api.myip.com"
  puts "use rest-client ruby gem"
  html = "<html><title>The Title</title><body>The Body</body></html>"
  parsed_data = Nokogiri::HTML.parse(html)
  puts parsed_data.title
  rr = RestClient::Request.execute :method => :get,
                                   :url => url,
                                   :ssl_version => 'SSLv23'
  result = "Title check: #{parsed_data.title} and remote api returned #{rr.body}"

  { statusCode: rr.code, body: result }
end

6. Create your Gemfile

# Gemfile
source "https://rubygems.org"
gem "rest-client"
gem "nokogiri"

7. Inside the project folder, run the Lambda ruby container:

$ docker run --rm -it -v $PWD:/var/task -w /var/task awsruby27

Inside the docker container:

8. Bundle your project

Only needed when you create or modify the Gemfile:

bash-4.2# bundle install --path vendor/bundle --clean

9. Test your handler code

Via Ruby’s -e option to execute a string containing the ruby code to require & invoke a function:

bash-4.2# ruby -e "require 'lambda_function'; puts lambda_handler(event: nil, context: nil)"
# output:
use rest-client ruby gem 
The Title {:statusCode=>200, :body=>"Title check: The Title and remote api returned {\"ip\":\"xxx.xxx.xxx.xxx\",\"country\":\"United States\",\"cc\":\"US\"}"}

Or interactively, via irb:

bash-4.2# irb
> require 'lambda_function'
> lambda_handler(event: nil, context: nil)
# output: 
use rest client ruby gem 
The Title {:statusCode=>200, :body=>"Title check: The Title and remote api returned {\"ip\":\"xxx.xxx.xxx.xxx\",\"country\":\"United States\",\"cc\":\"US\"}"} 
> exit
bash-4.2# exit

10. Back in Terminal, package the zip file

$ rm -f deploy.zip ; zip -q -r deploy.zip .

In AWS console:

11. Upload your working zip file deployment to the lambda function

In the AWS console, on the existing Lambda function’s CODE tab, use the UPLOAD FROM button on the right and upload the zip file.

12. Test it on Lambda

Now you have an AWS Lambda function in Ruby, which can use arbitrary Ruby gems, even native-extensions gems like nokogiri (which works on AWS since you compiled your Ruby bundle inside a docker container based on the AWS ‘build’ image for Ruby).

A few notes about deployment size…

The bundle’s deployment size increases significantly if you add certain native-extension gems such as nokogiri, and if the deployment package exceeds some limit you will not be able to view the code on the AWS Lambda console and will see this message instead:

The deployment package of your Lambda function "test2" is too large to enable inline code editing. However, you can still invoke your function.

The code still runs on Lambda, only the code view/edit is disabled when the package is too large.

For your NEXT projects, simply start at step 4.

You’ve already created your Lambda-in-a-Box Docker container, so simply create a project folder, write your lambda_function.rb and Gemfile, then use your already-built Docker container to run and test your code.

Why I’ll skip the J&J vaccine and wait for one of the “better” ones

Whenever my turn comes up, this is why I will decline the J&J vaccine if it is the “first” one offered to me, despite the catchy (but IMO provably false) government PR slogan, “the best vaccine is the one they offer you.”

I think there’s a lot of sloppy, quota-driven recommendations around which vaccine to get, hence this post. Uncle Sam wants as many vaccinated as soon as possible and IMO obfuscates or ignores some important information to that end.

Note – this post explicitly assumes one has a reasonable expectation of being able to get the Pfizer or Moderna within a week or two after declining the J&J. I believe the J&J is “better than nothing” if there is no other choice within a moderate timeframe… for example if I lived in a rural area without refrigeration.

Point 1: Data available today suggests to me that you are somewhat more likely to infect someone else (with possibly disastrous results) if you get the J&J vaccine instead of Pfizer or Moderna.

Remember, your own chance of infection if you get J&J is roughly 5 times higher than if you get Pfizer or Moderna. (About 25% vs 5% given the relative efficacies of 75% vs 95%.)
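The arithmetic behind that “roughly 5 times” figure, as a quick sanity check (using the efficacy numbers quoted above):

```ruby
# Risk of infection relative to an unvaccinated person ~ (1 - efficacy);
# the ratio of the two residual risks gives the "5 times" figure.
jj_efficacy   = 0.75
mrna_efficacy = 0.95

jj_risk   = 1 - jj_efficacy    # ~0.25, the "about 25%"
mrna_risk = 1 - mrna_efficacy  # ~0.05, the "about 5%"

ratio = jj_risk / mrna_risk    # ~5
puts "relative risk: #{ratio.round(1)}x"
```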

The “argument” used by J&J apologists (and bureaucrats who want to hit a vaccination goal) is that:

  • J&J is admittedly not as effective at preventing infection,
  • but if you get COVID the J&J is just as good at keeping you from developing a serious or fatal case of COVID.

So, good for you.

But let’s not ignore the inconvenient fact that if you yourself happen to get infected, experiencing only mild symptoms, there is no CDC conclusion to suggest that anyone not-yet-vaccinated whom you happen to infect will have milder symptoms as a result of your vaccination. And, unfortunately, there is no CDC conclusion as yet that you are less likely to infect someone else if you are infected-but-vaccinated.

So if you care about the unvaccinated people around you (nearly) as much as yourself, it seems to me you want to insist upon a vaccine where your chance of infection is about 1/5 as high as with the less-effective J&J.

Point 2: Misleading counter-claim: If you “wait for one of the good ones” to become available, you’re putting yourself at increased risk of catching COVID, so get J&J if that’s what they offer you.

This lazy argument presumes “wait” means “wait a long time”.

The increased risk while waiting obviously DEPENDS upon how long the “wait” is… what if it’s an hour? or tomorrow? or one week? or six months?

Even without doing math you know that if “the wait” is 1 minute you have near-zero incremental risk during that minute, but if “the wait” is a year the risk of waiting is a lot higher, right?

But let’s do some easy math… since Uncle Sam doesn’t publish the number, let’s make a first-approximation guesstimate of someone’s “incremental per-day risk of infection”, then double-check our first guesstimate against the CDC daily infection rate.

Estimates so far seem to be that about 30% of the USA has thus far been infected. Let’s call the time period 365 days. So as a very rough first approximation, the risk/day is on the order of 0.30 / 365 = 0.0008; now let’s arbitrarily triple that to 0.0025 to lean a little further into the risk. So by that (very simplistic) math, each day one “waits”, their added risk of infection is in the ballpark of 1/4 of 1%. (I expect it’s actually much smaller, because if you have avoided infection so far you’re likely being pretty careful and hopefully will remain so, plus rates of community spread seem to be leveling or even dropping.)

In fact, as a second estimate and sanity check, we get a far smaller number for the average daily infection rate (0.00015) if we divide current daily COVID cases by the US population; over two weeks that comes out to about 0.002, i.e. roughly 1/5 of 1% additional risk if you wait 2 extra weeks for a better vaccine.
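The two guesstimates above, spelled out:

```ruby
# Guesstimate 1: cumulative infections (~30% of the USA) spread over a year,
# then tripled as a deliberate margin of error
first_pass = 0.30 / 365     # ~0.0008 per day
padded     = first_pass * 3 # ~0.0025 per day, i.e. ~1/4 of 1%

# Guesstimate 2 (sanity check): current daily cases / US population,
# accumulated over a two-week wait
daily_rate = 0.00015        # the per-day rate quoted above
two_weeks  = daily_rate * 14 # ~0.002, i.e. ~1/5 of 1%

puts "padded per-day risk: #{padded.round(4)}"
puts "two-week added risk: #{two_weeks.round(4)}"
```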

My own conclusion:

Rather than accepting J&J today, waiting a week or 2 (possibly even 3) for Pfizer or Moderna to become available to me is safer for the still-unvaccinated people around me since it decreases the odds of my catching a mild case and then watching someone I infect experience a critical case.

A few other salient arguments re: J&J:

A – We don’t yet know which (if any) vaccines do/don’t prevent transmission to others if you are infected, so it is theoretically possible J&J “might” do a better job of preventing you infecting the people around you if you have an active (but mild) infection.

We don’t know. So it seems foolish to me to ignore the major points 1 & 2 above on the mere speculation that a vaccine that allows a 5-times higher infection rate might somehow reduce transmission to others when you’re infected. Sure it might be true, but we have no reason (yet) to rely on that speculation.

B – People who are less likely to come back for the second shot should get J&J because it is better than a single dose of Pfizer or Moderna.

Sure, get J&J if you’re gonna skip shot#2 of Pfizer/Moderna. (Although J&J is perhaps not better by a huge amount… Pfizer or Moderna still seem to do pretty well (about 50%) after one shot.)

I don’t minimize the importance of wide-spread vaccination to try to reach some sort of herd immunity. But until we reach that herd immunity, it seems to me that the people around you could be at increased net risk if you allow yourself to be stampeded into getting the J&J vaccine if you could have waited a few days and gotten one of the better ones.

I’m neither a doctor nor an epidemiologist. But neither am I afraid to employ a little critical thinking to challenge far-too-glib slogans from bureaucrats who have a quota (and who all got the better vaccine), for whom you and the people around you are merely statistics.

Autonomous Vehicles as “Bad Actors”

As discussed in detail with my pal Dave in early-and-mid 2020, it seems like autonomous cars make a pretty good platform for spying… innocent-looking mobile platforms with video and audio capability, which can be re-routed without suspicion* “a little bit out of the way” into an area of interest with the tweak of an algorithm here and there.

Looks like China is now concerned about similar capabilities with Teslas.

Personally, I hope someone like the Electronic Frontier Foundation ensures laws are created to prohibit the following:

  1. domestic government agencies “rerouting” an AV for purposes of spying, and
  2. advertisers “rerouting” an AV to “happen” to drive by an event or location (such as that BBQ joint filling the air with delicious BBQ smells).

* When you outsource routing decisions to your AV, you really won’t be aware of the “reason” the AV decided to drive past that suspected drug den or BBQ joint.

Enjoying Cafe du Monde Coffee and Chicory via a Moka Pot

Using a moka pot to make Cafe du Monde Coffee and Chicory requires a few special steps. And don’t forget to accessorize properly for optimal results.

There’s a trick to making it work

Love making my morning coffee with a moka pot. (We have a fully-automatic espresso machine, a french press, a pour-over setup, and a drip machine, but deploying our trusty old-world moka pot named Paolo imparts a special character.)

And I was delighted to find cans of Cafe du Monde’s Coffee and Chicory at a little grocery/everything store in Waialua called Waialua Fresh. (My favorite place to buy papayas and apple bananas and butter lettuce and arugula and mint.)

A moka pot works best with finely ground coffee, whereas Cafe du Monde canned coffee is quite coarse (and can be a little bitter).

Since the canned coffee is so coarse, and a bit bitter, there are a couple of “hacks” that I have found work pretty well:

  1. Fill the moka pot up to the safety valve with very hot water (just shy of boiling), which helps prevent over-heating the grounds during brewing and reduces bitterness.
  2. Go ahead and over-fill and tamp the grounds in the filter basket (normally a moka-pot no-no, but in this case it is needed due to the coarse grind).
  3. Use as low a heat on the burner as you can get away with (so the coffee comes out of the spout in a thick black drizzle), and remove it from heat the second it starts “gurgling” towards the end of the brewing.
  4. Even if you normally drink black, as I do, consider a little sweetener to counter-balance any bitterness. (I like condensed milk, which lasts forever in the fridge if kept in a lidded pyrex bowl.)
  5. (optional, but recommended) For a more authentic Cafe du Monde experience, enjoy your coffee while wearing Mardi Gras beads.