Local DNS resolution for server?
I have to preface this question with a disclaimer that I am an eager learner of Linux and servers in general, but I'm still a beginner and often run into roadblocks.
Current setup:
- Raspberry Pi 3 with Adguard Home acting as primary DNS
- unRAID server with Adguard Home acting as secondary DNS
- About a dozen other containers running on same server
- DHCP is handled by my router
Goal:
- provide local DNS names for the containers running in unRAID so I don't have to enter IP:port (e.g., calibre.local). A side benefit would be saving the various username/password combos into Bitwarden with an actual domain attached instead of 14 occurrences of 192.168.x.x
Additional info:
I previously had PiHole running on the Pi as the primary and only DNS, and I seem to recall you could put IP:Port as a custom DNS entry and have it resolve. AGH does have a feature for DNS Rewrites, but it does not allow for port numbers, IPs only. I switched to AGH because it seems to be more effective at blocking ads, which is likely more a function of the provided DNS blocklists out of the box than anything I was doing in PiHole. I would prefer to stick with AGH for adblocking/DNS if possible.
I looked into just modifying host files on the main computers I touch these apps from, but again, can't include port. What is a good solution for this? Preferably something approachable for a newb like me.
You'll need a bit more than just DNS records here.
When it comes to address resolution the main record types queried from a DNS server are the A (IPv4) and AAAA (IPv6) records. These only provide a mapping for host names to addresses; there's no way within those specific record types to additionally specify a port number.
Other record types such as SRV can include port numbering information, but within the context of accessing services via a browser those record types aren't going to be queried when accessing a website.
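To make that limitation concrete, here is what the two record types look like in standard zone-file notation (the names, addresses, and port are made up for illustration). Note the A record has no field where a port could go:

```
; An A record maps a name to an IPv4 address - there is no port field:
calibre.home.lan.     3600  IN  A    192.168.1.50

; An SRV record does carry a port (priority weight port target), but
; browsers don't look these up for ordinary http(s) URLs:
_http._tcp.home.lan.  3600  IN  SRV  10 5 8083 calibre.home.lan.
```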
The usual solution is to set up a web server that listens on the default port(s) of the address the DNS record points to, and then configure that web server to act as a reverse proxy that forwards requests through to services listening on any arbitrary IP and port combination.
The nature of this configuration can unfortunately be quite service-specific, as the web server may need to be additionally told to add or rewrite certain HTTP headers so that the backend application can correctly process the forwarded request.
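As a concrete sketch, an nginx server block doing this could look like the following. The hostname, IP, and port are placeholders, and the exact headers a given app needs will vary:

```
# Requests for calibre.local on port 80 are forwarded to the
# container actually listening on 192.168.1.10:8083.
server {
    listen 80;
    server_name calibre.local;

    location / {
        proxy_pass http://192.168.1.10:8083;

        # Pass the original host/client details through so the
        # backend app can build correct URLs and logs.
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

An AdGuard Home DNS Rewrite pointing calibre.local at the machine running nginx then completes the picture.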
To build off of this, I don't use Unraid myself but I do use Nomad as an orchestrator. My solution is to use Traefik as a reverse proxy that will automatically map subdomains to services running on Nomad. The domain is publicly available and I manage the DNS records via Cloudflare. All of this happens on a single box.
How this may work internally is that you use something like dnsmasq or PiHole to manage an internal record, and Traefik would handle the subdomain mappings. I don't have that set up yet as my internal applications have static port mappings and I have them memorized. I'll probably set that up eventually.
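For the Docker side of that idea, Traefik can discover containers and create routes from labels. A hedged sketch (the router/service names, domain, image, and port here are illustrative assumptions, not anything specific to unRAID or Nomad):

```
# docker-compose style labels; Traefik watches the Docker socket
# and creates the route automatically.
services:
  calibre:
    image: lscr.io/linuxserver/calibre-web
    labels:
      - "traefik.enable=true"
      # Route requests whose Host header is calibre.home.lan ...
      - "traefik.http.routers.calibre.rule=Host(`calibre.home.lan`)"
      # ... to this container's internal port 8083.
      - "traefik.http.services.calibre.loadbalancer.server.port=8083"
```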
I see....I think. And this is needed even for internal queries to containers on the server? Just to ease admin tasks while at home. I already have Cloudflare tunnels pointed to the 2 containers I want to actually access outside my network and that is working well. And while I have the unraid server's IP memorized, the ports slip my mind regularly so I end up launching the unraid web UI, then go to the docker section and click "web UI". Would like to just put server.local or pi.local, etc.
If I'm understanding the question correctly then yes, I believe a reverse proxy is likely to be the most efficient way to meet your stated goals even for "internal" queries (which I'm assuming means "queries from arbitrary devices on my home network").
As it's no longer 2:30am I'll elaborate a little bit more with an example from my current setup.
My setup consists of services running directly on a single mini PC with Ubuntu 22.04 as the OS. I'm not using and have no experience at all with unRAID, and I'm not using any form of containerization software such as Docker. The avoidance of containerization in my case is entirely just down to personal preferences rather than any technical reason.
On that system Pi-hole is installed and acts as both the DHCP and DNS server for my entire home network. I've added a single custom DNS record (which I'll refer to as home.example.org) that resolves to the static IP I've assigned to the system.

One difference between us is that I'm not using the .local TLD, but rather a private subdomain of a domain name that I own. I'm the king of the castle when it comes to that domain, so I define what is and isn't valid :)

Next up is the reverse proxy - in this case I'm using nginx. Alternatives like Caddy and Traefik exist and may compose better with containers, but I've no experience with those. The relevant parts of my configuration are the two files you can find in this Gist.
The nginx server is listening on the default ports of 80 and 443 and so responds to all requests sent straight to home.example.org. HTTP (80) simply redirects to HTTPS (443), though that's handled somewhere else in my configuration and isn't shown in the above link. For HTTPS, nginx provides TLS termination through an automatically-managed Let's Encrypt certificate obtained through a DNS-01 challenge.

As I'm only using a single DNS record, I've set up nginx such that access to individual services is /path based; for example I'll navigate to https://home.example.org/jellyfin
if I want to access my Jellyfin installation. Nginx then forwards these requests to the individual services, which I've configured to accept standard (non-encrypted) HTTP requests and bind their ports to a local-only network interface; either the loopback (127.0.0.1) adapter or a veth interface pair for services that cross-communicate between network namespaces (172.16.0.{1,2}).

Ok, for whatever reason that Gist link clicked with my brain. I had looked at reverse proxies before deciding to use CF tunnels to reach my few internet-available apps hosted on the local server. I didn't put together that it could also be leveraged locally. Thank you! I have some reading to do.
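For anyone else reading along, a minimal sketch of what that path-based layout can look like in nginx. Paths, addresses, and ports are illustrative (8096 is Jellyfin's default); Jellyfin also needs its own Base URL setting to match the path prefix:

```
server {
    listen 443 ssl;
    server_name home.example.org;

    # TLS certificate/key directives omitted for brevity.

    # https://home.example.org/jellyfin -> local Jellyfin instance
    location /jellyfin {
        proxy_pass http://127.0.0.1:8096;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    # Another service bound to a veth address in a separate namespace
    location /otherapp {
        proxy_pass http://172.16.0.2:8080;
    }
}
```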
While it's certainly possible the way Meorawr described it, I think the question is really "can your use case be solved with something simpler?" Like, say, bookmarks - you can add them in a way that just entering a short word in your address bar punts you to a URL of arbitrary complexity.
That is certainly an option as a low friction approach. While having Bitwarden save username/pwd for the various apps would be nice, it's not a deal breaker. Not sure how that would work with the bookmark idea.
There are really fine-grained settings in the Bitwarden browser extension (assuming it's the same as in Firefox everywhere) where you can adjust what is considered a match for a URL, and I'm preeeetty sure that the port can be part of it. So you should be able to have Bitwarden show you the matching account credentials even if they all visibly live on the same IP.
Ooh, interesting. I'll try that out, thank you.
I think that if you want to replace a host:port with a (local) hostname, a reverse proxy is what you're looking for.
HAProxy or nginx will do that just fine, including rewriting the HTTP headers. All the DNS names would resolve to the reverse proxy.
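For the HAProxy flavour, a name-based sketch might look like this (the hostnames, addresses, and ports are placeholders):

```
# haproxy.cfg fragment: route by Host header to different backends
frontend local_http
    bind *:80
    acl is_calibre  hdr(host) -i calibre.local
    acl is_sonarr   hdr(host) -i sonarr.local
    use_backend calibre_be if is_calibre
    use_backend sonarr_be  if is_sonarr

backend calibre_be
    # tell the app which scheme the client originally used
    http-request set-header X-Forwarded-Proto http
    server calibre 192.168.1.10:8083

backend sonarr_be
    server sonarr 192.168.1.10:8989
```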
I have Unraid set up with Unbound and Pi-hole, then Nginx Proxy Manager for hostnames. You can't use an FQDN for the DNS server, it has to be an IP. But you can use NPM to access AdGuard or Pi-hole or whatever using an FQDN.
For my setup, I had to use acme.sh to get NPM a wildcard cert for my domain, and now I can use its proxy hosts to create the names. They don't work outside my LAN, but I could set up Plex or whatever if I wanted remote access.
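For context on the wildcard cert step, an acme.sh DNS-01 wildcard issuance looks roughly like this. The dns_cf plugin and domain are assumptions here - substitute your own DNS provider's plugin and export its API credentials first:

```
# CF_Token must be exported beforehand for the Cloudflare plugin.
# Issues one certificate covering the apex and all subdomains.
acme.sh --issue --dns dns_cf -d 'example.org' -d '*.example.org'
```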