I want to be mad but FFS Reddit had Conde Nast money for most of its shittery so they had NO excuse except incompetence.
At least Fediverse servers are typically Steve’s old laptop or some shit so it’s understandable.
It’s generally more like “Steve’s 10 eur/mo cloud server in which they run ten other things next to Lemmy, which is written by two devs and barely held together by duct tape and prayers”
But that doesn’t change the overall point.
Good point. Who the hell hosts their own server anymore?
Honestly, it’s negligent if a major company does host their own servers at this point. Big cloud server companies specialize in that and can do it better than others, with better guarantees of stability and maintenance. Pretty much the reason people specialize in everything else.
What you’re saying here is literally a punchline in infosec because of how many breaches are down to incompetent cloud service providers, because said cloud service providers take security about as seriously as the aforementioned c-suite does.
*EDIT No, the c-suite thing doesn’t make sense. Shut up. I recast this post and removed a bit. I don’t need your approval. I DRIVE A DODGE STRATUS
Lol what? Every server has downtime. But the big cloud companies have actual liability for theirs.
You are entirely ignorant of how anything works. There’s no “liability” unless they seriously fuck a goat. Downtime is expected and, in fact, built into contracts. X amount of downtime for service, Y amount for unforeseen circumstances, Z amount for shiggles. There may be some prorating built into it, but even that will be after a certain amount of downtime.
No matter how you slice it, the only reason anyone uses cloud services is to cut costs. The actual facts simply do not pan out when you’re talking about security.
Businesses chose cloud providers because they think that it will cut costs.
Those contracts are exactly what I mean. A certain, small amount of downtime is allowed for, and it’s expected to be fixed shortly. If either of those things aren’t true, then the business is in breach of that agreement.
Anyway no u r ignorant. Peace out
Removed by mod
But I know where OC was coming from; 15-25 years ago it would have been the crap old laptop, the cardboard-box server, or the DEC PDP-11 the university is still powering for some reason.
… the PDP-11 😂
it’s that cheap? If I spun up an instance and paid less than $150 how many users would I be able to have before it implodes?
The instance I’m replying from is a 5 eur/mo box from Hetzner.
Your main concerns are gonna be active user count & storage space, especially if you decide to allow image or god forbid video uploads. Having a bunch of inactive users isn’t going to affect costs that much as long as they don’t have, like, a million subscriptions. (If they’re all subscribed to the same community, things will “deduplicate”.)
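If you do end up allowing uploads, it’s worth keeping half an eye on the image store (a sketch; the pict-rs volume path is made up and depends entirely on how you deployed it):

```
# rough check on where the disk is actually going (path is a guess; adjust to your setup)
du -sh /srv/lemmy/volumes/pictrs
df -h /
```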
What’s the learning curve like? That honestly seems like a much bigger hurdle than cost.
There are many guides on getting started with Linux servers as a whole. I recommend installing Debian Bookworm on a virtual machine or a spare laptop at first and going through the writeups all the major cloud providers have, just to get a feel for using the terminal & the initial setup (SSH hardening, reverse proxy configuration, and so on).
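For a taste of what the SSH hardening part boils down to, here’s a minimal sketch for a fresh Debian Bookworm box (“steve” is a made-up username, and only flip the last two settings once key-based login actually works for that user):

```
# run as root on a fresh Debian Bookworm install; "steve" is just a placeholder
adduser steve
usermod -aG sudo steve
# once key-based login works for the new user, turn off the risky stuff
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
systemctl restart ssh
```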
After getting an initial feel for Linux admining, start reading up on Docker, Docker Compose, and containers in general. Avoid Podman until you’re experienced with Docker, as it’s just different enough to trip you up. You can also check out LXC/LXD, although it’s way less popular.
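If you just want to kick the tyres on Debian first, something like this is enough (a sketch; Debian’s packages lag behind, and Docker’s own apt repository is the other common install route):

```
# Debian's own packages: not the newest, but fine for learning
apt install docker.io docker-compose
docker run --rm hello-world    # confirms the daemon and its networking work
docker-compose version         # the Compose v1 wrapper Debian ships
```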
Oh, and speaking of Docker: UFW AND DOCKER WILL NOT WORK TOGETHER! DOCKER BYPASSES UFW. (Just making sure you don’t find that out when it’s too late.)
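Concretely, the trap looks like this (a sketch; nginx is just a stand-in image and the ports are arbitrary):

```
# UFW thinks the port is closed...
ufw deny 8080/tcp
# ...but Docker writes its own iptables rules ahead of UFW's, so this container
# is still reachable from the internet:
docker run -d --name oops -p 8080:80 nginx
# the usual workaround: publish on localhost only and put a reverse proxy in front
docker run -d --name better -p 127.0.0.1:8081:80 nginx
```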
Be careful of guides that are old (even a year makes a difference) or written for a different “distro” than the one you have. An exception to the second point is the Arch Linux wiki, which is one of the best resources in general, aside from a few Arch-specific bits like the exact package names to install. You should also use Arch’s “man pages” site as your reference, since it’s built from the latest versions of packages, unlike other man page mirrors that are frequently outdated (like die.net).
Lemmy itself is harder to get right because the instructions so far are intended for people who kinda know what they’re doing, but once you have the base Linux admin knowledge, it won’t be that hard to pick up the parts necessary to get working with something like Lemmy.
Do you have any specific resources or suggestions? I’m a software dev with lots of DigitalOcean experience looking to host my own instance. Also, can you log in to wefwef through your instance, or how do you access everything, specifically on mobile?
Depending on how well you know your way around, my recommendation is to not use the Ansible setup but instead treat it as documentation while doing things your way. It has quite a bit of strange stuff going on (postfix? two nginx installs with only one being in a container?) and seems to be missing important things such as SSH hardening. It also assumes it’ll be the only thing running on your server (a horrible yet common practice, unfortunately), so if you already have anything set up it may clobber it to do things its own way and end up breaking something.
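It’s still worth mining for the important bits, just offline (a sketch; repo name from memory, so double-check it):

```
# skim the official playbook instead of running it blind
git clone https://github.com/LemmyNet/lemmy-ansible
# see what it would actually drop on your box: nginx config, compose file, templates
find lemmy-ansible -name '*.yml' -o -name '*.conf' -o -name '*.j2' | sort
```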
I haven’t tried wefwef in particular but all native apps I tried work just fine. An issue I can see cropping up from wefwef is that Lemmy’s CORS policies are way too restrictive by default. No idea if they do any kind of proxying to get around that but that would be the main issue I’d imagine.
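If you want to check before pointing wefwef at an instance, you can just look at the headers it sends back (a sketch; the instance domain is a placeholder and /api/v3/site is the 0.18-era endpoint):

```
# see which Access-Control-* headers the instance actually returns
curl -s -D - -o /dev/null -H 'Origin: https://wefwef.app' \
  'https://your-instance.example/api/v3/site' | grep -i '^access-control'
```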
Only one way to find out ;)
On a more serious note… I’m not sure if much has changed since then (probably, things have been moving fast…), but lemmy.world was hosted on about a $150 / mo server:
https://blog.mastodon.world/
https://www.hetzner.com/dedicated-rootserver/matrix-ax (it’s the most expensive option here)
That’s pretty beefy. You could probably get away with much less for a smaller instance.
But…but…spez said it will cost 200k per month!
To be fair, Reddit is a lot bigger than any Lemmy instance, and Lemmy instances have the benefit of being decentralised, so the load is on many different servers owned by different people as opposed to one group of servers owned by one company.
That’s because he wants all 55 million active users accessing his servers so he can shove ads down their throats.
it’s that cheap? If I spun up an instance and paid less than $150 how many users would I be able to have before it implodes?
About tree fiddy
Reddit’s database was pretty poorly designed. They designed it to be really flexible so they could make changes easily early on, but it was highly inefficient. I don’t know if it’s still like that, but the old website’s source code is public and it is very inefficient.