Everything is accessible through VPN (Wireguard) only
Same. Always-on VPN on my phone for on-the-go ad blocking via Pi-hole.
Same here. Taught my wife how to start WireGuard on her android phone and then access any of the services I run. This way I only have one port open and don’t have to worry too much.
How about running your wireguard server on a VPS and then connecting to the same interface as clients from your mobile and home network? No ports open on your side!
That’s what I do. The beauty of wireguard is that it won’t respond at all if you don’t send the right key. So from the outside it will appear as if none of your ports are open.
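As a sketch, the hub-and-spoke setup described above can look roughly like this on the VPS; the keys, the 10.8.0.0/24 subnet and port 51820 are illustrative placeholders, not details from the thread:

```ini
# /etc/wireguard/wg0.conf on the VPS ("hub") -- a sketch, not a drop-in config.
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <vps-private-key>

# Home server peer -- dials out to the VPS, so no inbound ports at home
[Peer]
PublicKey = <home-server-public-key>
AllowedIPs = 10.8.0.2/32

# Phone peer
[Peer]
PublicKey = <phone-public-key>
AllowedIPs = 10.8.0.3/32
```

Both the home server and the phone connect outbound to the VPS, so they can reach each other through it; behind NAT the clients usually want `PersistentKeepalive = 25` in their own configs so the mapping stays open.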
The only externally accessible service is my wireguard vpn. For anything else, if you are not on my lan or VPN back into my lan, it’s not accessible.
This is the way.
Funnily enough it’s exactly the opposite of where the corporate world is going, where the LAN is no longer seen as a fortress and most services are available publicly but behind 2FA.
Corporate world, I still have to VPN in before much is accessible. Then there’s also 2FA.
Homelab, ehhh. Much smaller user base and within smackable reach.
Oh right. The last three businesses I’ve worked in have all run fully public services; assume the intruder is already in the LAN, so don’t treat it like a barrier.
Can I ask your setup? I’d like to get this for myself as well.
Not OP but… I have an old PC as a server, WireGuard in a Docker container, a port forward on the router, and that’s it
Which image? I’ve seen a few wireguard options on docker hub
Linuxserver
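For reference, a setup with the linuxserver.io image usually boils down to a small compose file like this; `SERVERURL`, the peer names and paths are placeholders to adjust, and it’s worth checking the image’s own docs for current options:

```yaml
# docker-compose.yml sketch for the linuxserver.io WireGuard image
services:
  wireguard:
    image: lscr.io/linuxserver/wireguard:latest
    cap_add:
      - NET_ADMIN
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
      - SERVERURL=vpn.example.com   # your DDNS name or public IP (placeholder)
      - PEERS=phone,laptop          # one client config/QR code generated per peer
    volumes:
      - ./config:/config
    ports:
      - 51820:51820/udp             # the single forwarded port
    sysctls:
      - net.ipv4.conf.all.src_valid_mark=1
    restart: unless-stopped
```

Forward UDP 51820 on the router to the host and import the generated peer configs (under `./config`) on the clients.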
Sorry, haven’t logged on in a bit. I use OPNsense on an old PC as my firewall, with the WireGuard package installed.
Then I use the WireGuard client on my family’s phones/laptops, set to auto-connect when NOT on my home wifi. That way media playback, AdGuard Home DNS and everything else acts as seamless as possible even when away, while still keeping all ports blocked.
Nothing I host is internet-accessible. Everything is accessible to me via Tailscale though.
Something like 95% stays local and is accessed remotely via WireGuard. The rest is stuff I need to host under a hostname with a trusted cert, because apps I use require it, or because I need to share links to files for work, school, etc. For the external stuff I use Cloudflare Tunnels, since I use DDNS and want to avoid/can’t use port forwarding. Works well for me.
Just in case you missed this, you can issue valid HTTPS Certificates with the DNS challenge. I use LetsEncrypt, DeSEC and Traefik, but any other supported provider with Lego (CLI) would work.
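As an illustration of the DNS-01 route mentioned above, a one-shot issuance with the Lego CLI and the deSEC provider looks roughly like this; the email, domain and token are placeholders, and any other Lego-supported DNS provider works the same way with its own credential variables:

```shell
# Sketch: wildcard cert via the DNS challenge -- no inbound ports needed,
# since validation happens through a DNS TXT record, not HTTP.
DESEC_TOKEN=<your-desec-api-token> lego \
  --email you@example.com \
  --dns desec \
  --domains '*.home.example.com' \
  run
```

Traefik can do the same natively via its `dnsChallenge` certificate resolver, so the cert renews without any service being reachable from the internet.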
Everything is behind a wireguard vpn for me. It’s mostly because I don’t understand how to set up Https and at this point I’m afraid to ask so everything is just http.
I’ve been using YunoHost, which does this for you, but I’m thinking of switching to a regular Linux install, which is why I’ve been searching for stuff to replace YunoHost’s features. That’s how I came across Nginx Proxy Manager, which lets you easily configure that stuff with a web UI. From what I understand it also does certificates for you for HTTPS. Haven’t had the chance to try it out myself tho, because I only found it earlier today.
NPM is nice and easy to use.
It’s not hard really, and you shouldn’t be afraid to ask; if we don’t ask, we don’t learn :)
Look at Caddy webserver, it does automated SSL for you.
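To show how little there is to it, a minimal Caddyfile for proxying one service looks like this; the hostname and upstream port are placeholders (8096 happens to be Jellyfin’s default), and Caddy obtains and renews the certificate on its own as long as the domain resolves to it:

```
# Caddyfile sketch -- automatic HTTPS, no manual cert handling
jellyfin.example.com {
    reverse_proxy 127.0.0.1:8096
}
```

Run `caddy run` (or the packaged service) in the directory containing the Caddyfile and the cert side is handled for you.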
Thank you. It was mostly meant as a joke tho. I’m not actually afraid to ask, more ignorant, because it’s all behind the VPN and that’s just so much easier and safer, and I know how to do it, so less effort. HTTPS is just magic to me at the moment and I like it that way. Maybe one day I’ll learn the magic spells, but not today.
Careful with Caddy, as it’s had a few security issues.
All software has issues, such is the nature of software. I always say: if you self-host, at least follow some security-related websites to keep up to date about these things :)
Do you have any suggestions for reputable security related websites?
Too many :) Here is a snippet of my RSS feed; save it as an xml file and most RSS readers should be able to import it :) https://pastebin.com/q0c6s5UF
A few days late here, but that pastebin had some really good feeds 🙏 I noticed the OPML file was labeled FreshRSS, and I also use FreshRSS. So I fixed up the feeds and configured FreshRSS to scrape the full articles (when possible) and bypass ads, tracking and paywalls.
I figured I’d pay it forward by sharing my revised OPML file.
I also included some of my other feeds that are related (if you or anyone else is interested).
Some of the feeds are created from scratch, since a few of these sites don’t offer RSS, so if the sites change their layout the configs may need to be adjusted a bit, but in my experience this rarely happens.
I had to replace some of the urls with publicly hosted versions of the front-ends I host locally and scrape, but feel free to change it up however you like.
https://gist.akl.ink/Idly9231/22fd15085f1144a1b74e2f748513f911
Thank you :)
I had everything behind my LAN, but published things like Nextcloud to the outside after finally figuring out how to do that even without a public IPv4 (being behind DS-Lite from my provider).
I knew about Cloudflare Tunnels but I didn’t want to route my stuff through their service. And using Immich through their tunnel would be very slow.
I finally figured out how to publish my stuff using an external VPS that’s doing several things:
- being a OpenVPN server
- being a cert server for OpenVPN certs
- being a reverse proxy using nginx with certbot
Then my servers at home just connect to the VPS as VPN clients so there’s a direct tunnel between the VPS and the home servers.
Now when I have an app running on port 8080 on my home server, I can set up nginx so that the domain points to the VPS’s public IPv4 and IPv6, and the VPS routes the traffic through the VPN tunnel to the home server and its port, using the home server’s IPv4 inside the tunnel. The clients are configured with static IPv4s inside the VPN tunnel when connecting to the VPN server.
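The VPS-side reverse proxy described above can be sketched as a single nginx server block; here 10.8.0.2 is the home server’s static address inside the VPN tunnel and 8080 the app’s port, both placeholders, with certbot managing the certificate files:

```nginx
# Sketch: nginx on the VPS terminating TLS and forwarding into the VPN tunnel
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name app.example.com;

    ssl_certificate     /etc/letsencrypt/live/app.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;

    location / {
        proxy_pass http://10.8.0.2:8080;   # home server's tunnel IP (placeholder)
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

One such block per published service, each proxying to the appropriate tunnel IP and port.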
Took me several years to figure out but resolved all my issues.
I’m interested in why you’re terminating TLS on your VPS instead of doing it on your home network
What benefit does it have instead of getting a dynamic DNS entry and port forwarding on your internet connection?
With DS-Lite you don’t have a public IPv4 at all. Not a static one, but also not a dynamic one. The ISP only gives you a public IPv6; your IPv4 address is shared with other users, which is done to conserve IPv4 space. Not having even a dynamic public IPv4 means you can’t use DynDNS etc. It’s simply not possible.
You could publish your stuff via IPv6 only but good luck accessing it from a network without IPv6.
You could also spin up tunnels between a public server and the private one with SSH (yes, SSH can do stuff like that), but that’s very hard to manage with many services, so you’re better off building a setup like mine.
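For completeness, the SSH reverse-tunnel approach mentioned above is a one-liner per service; the hostnames and ports here are illustrative:

```shell
# Sketch: the home server dials out to the VPS and exposes its local
# port 8080 on the VPS's loopback interface (reverse tunnel).
ssh -N -R 127.0.0.1:8080:localhost:8080 user@vps.example.com

# With GatewayPorts enabled in the VPS's sshd_config, the bind can be public:
#   ssh -N -R 0.0.0.0:8080:localhost:8080 user@vps.example.com
```

It works, but keeping one tunnel alive per service (plus reconnect logic, e.g. via autossh) is exactly the management overhead the comment is warning about.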
Thanks for the great explanation!
I expose most things to the web so long as they have auth and 2FA options. The one exception being my Jellyfin server. I share it with friends and needed to make it as easily accessible as possible.
With Cloudflare WAF, a reverse proxy, and an isolated subnet with IDP I feel comfortable with public services. Nothing’s perfect, but if they get through it and pwn my lab I’ll just nuke it and rebuild.
Everything exposed except NFS, CUPS and Samba. They absolutely cannot be exposed.
Like, even my DNS server is public because I use DoT for AdBlock on my phone.
Nextcloud, IMAP, SMTP, Plex, SSH, NTP, WordPress, ZoneMinder are all public facing (and mostly passworded).
A fun note: All of it is dual-stacked except SSH. Fail2Ban comparatively picks up almost zero activity on IPv6.
Nearly all of them. Nextcloud, Jellyfin, Vaultwarden, Spacebar, and 2FAuth, all behind SWAG, an NGINX reverse proxy. SWAG made it very easy to set up HTTPS, and now I can throw anything behind a subfolder or subdomain.
100% is lan only cause my isp is a cunt
Ah, CG-NAT, is it? There are workarounds
NAT to extremes… it’s Starlink so I think I’m almost completely obfuscated from the internet entirely.
quite frankly i don’t really host anything that needs to be accessible from the general Internet so I never bothered with workarounds.
Tailscale with the Funnel feature enabled should work for most ISPs, since it’s set up via an outbound connection. Though maybe they’re Super Cunts and block that too.
Available to the internet via reverse proxy:
- Jellyfin
- Navidrome
- Two websites
- matrix chat server
- audiobookshelf
LAN only:
- homepage
- NGINX Proxy Manager
- Portainer
There’s more in both categories but I can’t remember everything I have running.
What is homepage? I’m testing Homarr right now (assuming it’s similar) but haven’t settled on it yet
I believe it’s this
I’ve been eyeing it myself
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:
DNS: Domain Name Service/System
HA: Home Assistant automation software ~ High Availability
HTTP: Hypertext Transfer Protocol, the Web
HTTPS: HTTP over SSL
IMAP: Internet Message Access Protocol for email
IP: Internet Protocol
NAS: Network-Attached Storage
NAT: Network Address Translation
Plex: Brand of media server package
SMTP: Simple Mail Transfer Protocol
SSH: Secure Shell for remote terminal access
SSL: Secure Sockets Layer, for transparent encryption
TLS: Transport Layer Security, supersedes SSL
VPN: Virtual Private Network
VPS: Virtual Private Server (opposed to shared hosting)
nginx: Popular HTTP server
[Thread #549 for this sub, first seen 26th Feb 2024, 21:45] [FAQ] [Full list] [Contact] [Source code]
Jellyfin and Miniflux are internet facing because it would be turbo annoying otherwise to deal with them
Unlike most here, I’m not as concerned with opening things up. The two general guidelines I use are 1. Is it built by a big organization with intent to be exposed, and 2. What’s the risk if someone gets in.
All my stuff is in Docker, so it’s compartmentalized, with little risk of breaking out of a container. Each service is on its own Docker network to the reverse proxy, so there’s no cross-container communication unless they’re part of the same stack.
So following my rules, I expose things like Nextcloud and Mediawiki, and I would never expose Paperless which has identity documents (access remotely via Tailscale). I have many low-risk services I expose on demand. E.g. when going away for a weekend, I might expose FreshRSS so I can access the feed, but I’d remove it once I got home.
Doesn’t Nextcloud running in Docker want the socket exposed?
I googled around for an example https://book.hacktricks.xyz/linux-hardening/privilege-escalation/docker-security/docker-breakout-privilege-escalation.
Ignore me if you’ve already hardened the containers.
I’ve never known a reason to expose the docker socket to Nextcloud. It’s certainly not required, I’ve run Nextcloud for years without ever granting it socket access.
Most of the things on that linked page seem to be for Docker rather than Nextcloud, and relate to non-standard configuration. As someone who is not a political target, I’d be pretty happy that following Nextcloud’s setup guide and hardening guide is enough.
I also didn’t mention it, but I geoblock access from outside my country as a general rule.
I was looking into setting up Nextcloud recently and the default directions suggest exposing the socket. That’s crazy. I checked again just now. I see it is still possible to set it up without socket access, but that set of instructions isn’t as prominent.
I linked to Docker in specific because if Nextcloud has access to the socket, and hackers find some automated exploit, they could easily escalate out of the Docker container. It sounds like you have it more correctly isolated.
Was it Nextcloud or Nextcloud All in One? I’ve just realised that the Nextcloud docker image I use is maintained by Docker, not Nextcloud. It’s this one: https://hub.docker.com/_/nextcloud/
I use Docker-compose and even the examples there don’t have any socket access.
The all-in-one image apparently uses Traefik, which seems weird: an auto-configuring reverse proxy for an all-in-one image where you already know the lay of the land. Traefik requires access to the Docker socket for auto-configuration, but you can proxy the requests to limit access to only what it needs if you really want to use it.
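The socket-proxying idea can be sketched like this; tecnativa/docker-socket-proxy is one common choice (my pick, not from the thread), and the point is that Traefik never touches the real socket directly:

```yaml
# Sketch: granting Traefik only read access to container metadata
services:
  socket-proxy:
    image: tecnativa/docker-socket-proxy
    environment:
      - CONTAINERS=1      # allow read-only container queries; everything else denied
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro

  traefik:
    image: traefik:v3
    command:
      - --providers.docker.endpoint=tcp://socket-proxy:2375
    depends_on:
      - socket-proxy
```

If Traefik is compromised, the attacker only gets the proxy’s allowed API subset, not the ability to start privileged containers through the raw socket.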
What I was looking at was the All in One, yes. I didn’t realize there was a separate maintained image, thank you! I’d much rather have a single image without access to the socket at all, I’ll give that a shot sometime.
One warning: in my experience, you cannot jump two major versions. Not just that it won’t work: if you try it, everything will break beyond repair and you’ll be restoring from a backup.
Two major versions can sometimes be only a few months apart, so make sure you have a regular update schedule!
(Also, people say never update to an X.0 release; the first version of a major release often has major bugs.)
TL;DR don’t take too long to update to new releases, and don’t update too quickly!
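With the Docker image, the one-major-at-a-time rule amounts to pinning the major tag and stepping through the versions; the 28 → 29 → 30 sequence below is only an example, and a backup first is assumed:

```shell
# Sketch: stepwise Nextcloud major upgrades with a pinned image tag
docker compose down
sed -i 's/nextcloud:28/nextcloud:29/' docker-compose.yml
docker compose up -d
docker compose logs -f nextcloud   # wait for the 29 migration to finish

# ...then, and only then, repeat for the next major:
sed -i 's/nextcloud:29/nextcloud:30/' docker-compose.yml
docker compose up -d
```

Pinning the major (rather than using `latest`) is what makes this controlled: the container can be recreated freely without accidentally skipping a migration.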
Also, the Docker image is often a day or so behind the new release, so Nextcloud tells you an update is available but you often then need to wait until the next day for the updated Docker image. I guess this is because (as I’ve just learnt) the image is built by Docker, not Nextcloud.