Node, TLS, and DNS

-Ben

Node presents its own set of problems when using local domain names. In my previous post about reverse proxying, I was able to connect to all my local services with verified domain names. But when I went to have my frontend address my backend as “backend.localhost” instead of “localhost:3256”, I quickly learned that Node works very differently from the web browser.

SSL Certs

For starters, Node uses its own hard-coded list of root SSL certificates instead of the system trust store. The only way to add to it is the NODE_EXTRA_CA_CERTS environment variable, which would need to be set with every command! So I’m exporting it from my .zshrc file. That makes the configuration harder to share, but I can live with it.
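Here’s a minimal sketch of what that export looks like in my .zshrc. The cert path assumes Caddy’s default data directory on macOS, so adjust it if yours lives somewhere else:

```shell
# Make every Node process trust Caddy's locally generated root CA.
# Path assumes Caddy's default data dir on macOS -- check yours with `caddy environ`.
export NODE_EXTRA_CA_CERTS="$HOME/Library/Application Support/Caddy/pki/authorities/local/root.crt"
```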

DNS resolution

Then I found that Node won’t resolve *.localhost DNS queries to the local loopback the way my browser will. The quick and easy fix is to edit the /etc/hosts file, and feel free to do that if it works for you.
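For reference, the entries would look something like this (the hostnames are just the ones from my setup; use whatever names your reverse proxy serves):

```
# /etc/hosts
127.0.0.1  frontend.localhost
127.0.0.1  backend.localhost
```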

But I decided to try a second time to run PiHole. Knowing I couldn’t publish port 53 from Docker on my Mac, I booted up a Debian VM, configured a little bridge network, and I’m now running PiHole off that. I pointed my DNS resolution at that VM’s IP address and I have a slick little solution on my laptop! Debian is running in Low Memory Mode, but PiHole doesn’t seem to be skipping a beat. I’d call this a good solution for my local problem, but I plan to share the hosts file solution with my coworkers, or take it all a step further and put everything into Docker to use its name resolution.

Vercel serverless

Now that Node can find my services and trusts them, I thought I’d be all set! Then those Vercel serverless functions started complaining at me again. It turns out I no longer need to rewrite the Host header, because I set VERCEL_URL to frontend.localhost, which is exactly the Host header Vercel expects. That actually makes my reverse proxy setup much cleaner.
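In practice that’s a single environment variable on the frontend, shown here as a shell export (how you actually inject it depends on your setup):

```shell
# With VERCEL_URL matching the name the reverse proxy serves,
# the serverless runtime sees the Host header it already expects,
# so the proxy no longer has to rewrite it.
export VERCEL_URL="frontend.localhost"
```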

Conclusion

None of this was difficult to set up once, but it doesn’t scale or translate well to my coworkers’ computers. It’s a lot more work than just docker compose up, which is what I was hoping to achieve. As it stands, my coworkers would need to know a little more about networking than I wanted to require of them. A solution to that might be more Docker.

If I containerized all the dev services, I could probably share the Caddy cert via a Docker volume (no more manual env var). I could change the reverse proxy to target container names instead of localhost:port. Would I still need DNS? That’s maybe the largest problem, since I know I can’t publish the DNS port on my Mac, full stop. But if I took advantage of Docker’s built-in name resolution, Node would get it for free. My inter-container API calls would then go to a different name than the one my web browser uses (still going through the reverse proxy), but that might be okay too. Would TLS verification work between Docker containers? That isn’t something I’ve tried.
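A rough compose sketch of that idea — the service names and paths here are hypothetical, not something I’ve actually run:

```yaml
# docker-compose.yml (sketch, untested)
services:
  caddy:
    image: caddy:2
    ports: ["443:443"]
    volumes:
      - caddy-data:/data            # Caddy keeps its local CA under /data/pki
  frontend:
    build: ./frontend
    environment:
      # still an env var, but set once here instead of in everyone's .zshrc
      NODE_EXTRA_CA_CERTS: /caddy/pki/authorities/local/root.crt
    volumes:
      - caddy-data:/caddy:ro        # share the CA cert via the volume
  backend:
    build: ./backend                # reachable as "backend" via Docker's DNS
volumes:
  caddy-data:
```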