> this server is physically held by a long time contributor with a proven track record of securely hosting services. We can control it remotely, we know exactly where it is, and we know who has access.
I can’t be the only one who read this and had flashbacks to projects that fell apart because one person had the physical server in their basement or a rack at their workplace and it became a sticking point when an argument arose.
I know self-hosting is held as a point of pride by many, but in my experience you’re still better off putting lower cost hardware in a cheap colo with the contract going to the business entity which has defined ownership and procedures. Sending it over to a single member to put somewhere puts a lot of control into that one person’s domain.
I hope for the best for this team and I’m leaning toward believing that this person really is trusted and capable, but I would strongly recommend against these arrangements in any form in general.
EDIT: F-Droid received a $400,000 grant from a single source this year ( https://f-droid.org/2025/02/05/f-droid-awarded-otf-grant.htm... ) so now I’m even more confused about how they decided to hand this server to a single team member to host under undisclosed conditions instead of paying basic colocation expenses.
Yup. But the same can happen in shared hosting/colo/aws just as easily if only one person controls the keys to the kingdom. I know of at least a handful of open source projects that had to essentially start over because the leader went AWOL or a big fight happened.
That said, I still think that hosting a server in a member's house is a terrible decision for a project.
> if only one person controls the keys to the kingdom
True, which is why I said the important parts need to be held by the legal entity representing the organization. If one person tries to hold it hostage, it becomes a matter of demonstrating that person doesn’t legally have access any more.
I’ve also seen projects fall apart because they forgot to transfer some key element into the legal entity. A common one is the domain name, which might have been registered by one person and then just never transferred over. Nobody notices until that person has a falling out and starts holding the domain name hostage.
> 400K would go -fast- if they stuck to a traditional colo setup.
I don’t know where you’re pricing colocation, but I could host a single server indefinitely from the interest alone on $400K at the (very nice) data centers I’ve used.
Colocation is not that expensive. I’m not understanding how you think $400K would disappear “fast” unless you think it’s thousands of dollars per month?
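For a rough sense of scale (the interest rate and colo fee below are illustrative assumptions, not actual quotes):

```python
# Back-of-the-envelope: can interest on a $400K grant cover colo fees?
# All figures are assumptions for illustration, not F-Droid's actual costs.
grant = 400_000
annual_rate = 0.04                      # assumed conservative yield
annual_interest = grant * annual_rate   # $16,000/year

colo_monthly = 300                      # assumed colo fee for a few U, USD
colo_annual = colo_monthly * 12         # $3,600/year

print(f"interest/year: ${annual_interest:,.0f}")
print(f"colo/year:     ${colo_annual:,.0f}")
print(f"interest covers colo {annual_interest / colo_annual:.1f}x over")
```

Even at these made-up numbers, the interest alone covers the colo bill several times over without touching the principal.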
Colo is when you want to bring your own hardware, not when you want physical access to your devices. Many (most?) colo datacenters are still secure sites that you can't visit.
Ugh. This 100% shows how janky and unmaintained their setup is.
All the hand waving and excuses around global supply chains, quotes, etc... it took them pretty long to acquire commodity hardware and shove it in a special someone's basement, and they're trying to make it seem like a good thing?
F-Droid is often discussed in the GrapheneOS community, the concerns around centralization and signing are valid.
I understand this is a volunteer effort, but it's not a good look.
As someone who has run many volunteer open source communities and projects for more than two decades, I totally get how big "small" wins like this are.
The internet is run on binaries compiled in servers in random basements and you should be thankful for those basements because the corpos are never going to actually help fund any of it.
"I understand this is a volunteer effort, but it's not a good look."
I would agree that it is not a good look for this society to lament so much about the big evil corporations and invest so little in the free alternatives.
Apart from the "someone's basement", as objected to in this thread, it also doesn't say they acquired "commodity hardware"; I took it to suggest the opposite, presumably for good reason.
I read it a bit differently: you don't need to be a mega-corp with millions of servers to actually make a difference for the better. It really doesn't take much!
The issue isn’t the hardware, it’s the fact that it’s hosted somewhere private, in conditions they won’t name, under the control of a single member. Typically colo providers are used for this.
Eh. It's just a different set of trade-offs unless you start doing things super-seriously like Let's Encrypt.
With F-Droid, their main strength has always been reproducible builds. We ideally just need to start hosting a second F-Droid server somewhere else and then compare the results.
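A minimal sketch of that cross-check, assuming both servers publish the same reproducibly built APKs (the names and bytes below are made up; a real check would hash the downloaded files and the signed index):

```python
import hashlib

def index_digests(artifacts: dict) -> dict:
    """Map each APK name to the SHA-256 hex digest of its bytes."""
    return {name: hashlib.sha256(data).hexdigest()
            for name, data in artifacts.items()}

# Hypothetical artifacts as served by two independently hosted builders.
server_a = {"app.apk": b"reproducible-bytes", "tool.apk": b"same-everywhere"}
server_b = {"app.apk": b"reproducible-bytes", "tool.apk": b"tampered!!"}

a, b = index_digests(server_a), index_digests(server_b)
mismatches = [name for name in a if a[name] != b.get(name)]
print("all builds match" if not mismatches else f"differ: {mismatches}")
```

Any divergence between the two servers' digests flags either a non-reproducible build or a compromised host, which is exactly the signal you want.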
They didn't say what conditions it's held in. You're just adding FUD; please stop. It could be under the bed, or it could be in a professional server room of the company run by the mentioned contributor.
100%. Just as an example I have several racks at home, business fiber, battery backup, and a propane generator as a last resort. Also 4th amendment protections so no one gets access without me knowing about it. I host a lot of things at home and trust it more than any DC.
> Also 4th amendment protections so no one gets access without me knowing about it.
If there's ever a need for a warrant for any of the projects, the warrant would likely involve seizure of every computer and data storage device in the home. Without a 3rd party handling billing and resource allocation they can't tell which specific device contains the relevant data, so everything goes.
So having something hosted at home comes with downsides, too. Especially if you don't control all of the data that goes into the servers on your property.
Isn't a business line quite expensive to maintain per month along with a hefty upfront cost? For a smaller team with a tight budget, just going somewhere with all of that stuff included is probably cheaper and easier like a colo DC.
“F-Droid is not hosted in just any data center where commodity hardware is managed by some unknown staff. We worked out a special arrangement so that this server is physically held by a long time contributor with a proven track record of securely hosting services. We can control it remotely, we know exactly where it is, and we know who has access.”
Yikes. They don't need a "special arrangement" for those requirements. This is the bare minimum at many professionally run colocation data centers. There is not a security requirement that can't be met by a data center -- being secure to customer requirements is a critical part of their business.
Maybe the person who wrote that is only familiar with web hosting services or colo-by-the-rack-unit type services where remote-hands services are more commonly relied on. But they don't need to use these services. They can easily get a locked cabinet (or even just a 1/4 cabinet) only they could access.
A super duper secure locked cabinet, accessible only to them or anyone with a bolt cutter.
You want to host servers on your own hardware? Uh, yikes. Let's unpack this. As a certified AWS Kubernetes professional time & money waster, I can say with authority that this goes against professional standards (?) and is therefore not a good look. Furthermore, I can confirm that this isn't it, chief.
I never questioned or thought twice about F-Droid's trustworthiness until I read that. It makes it sound like a very amateurish operation.
I had passively assumed something like this would be a Cloud VM + DB + buckets. The "hardware upgrade" they are talking about would have been a couple clicks to change the VM type, a total nothingburger. Now I can only imagine a janky setup in some random (to me) guy's closet.
In any case, I'm more curious to know exactly what kind of hardware is required for F-Droid; they didn't mention any specifics about CPU, memory, storage, etc.
A "single server" covers a pretty large range of scale; it's more about how F-Droid is used and perceived. Package repos are infrastructure, and reliability is important. A server behind someone's TV is much more susceptible to power outages, network issues, accidents, and tampering. Again, I don't know that's the case, since they didn't really say anything specific.
> not hosted in just any data center where commodity hardware is managed by some unknown staff
I took this to mean it's not in a colo facility either; I assumed it meant someone's home, AKA residential power and internet.
> It makes it sound like a very amateurish operation.
Wait until you find out how every major Linux distribution and the software that powers the internet is maintained. It is all a wildly under-funded shit show, and yet we do it anyway because letting the corpos run it all is even worse.
"F-Droid is not hosted in a data centre with proper procedures, access controls, and people whose jobs are on the line. Instead it's in some guy's bedroom."
It could just be a colo; there are still plenty of data centres around the globe that will sell you space in a shared rack with a certain power density per U. The list of people who can access that shared locked rack is likely a known quantity with most such organisations, and I know in the past we had some details of the people who were responsible for it.
In some respects, having your entire reputation on the line matters just as much. And sure, someone might have a server cage in their residence, or maybe they run their own small business and it's there. But the vagueness is troubling, I agree.
A picture of the "living conditions" for the server would go a long way.
> State actor? Gets into data centre, or has to break into a privately owned apartment.
I don’t think a state actor would actually break into either in this case, but if they did, breaking into the private apartment would be a dream come true. Breaking into a data center requires coordination and ensuring a lot of people with access and visibility stay quiet. Breaking into someone’s apartment means waiting until they’re away from the premises for a while and then going in.
Getting a warrant for a private residence also would likely give them access to all electronic devices there as no 3rd party is keeping billing records of which hardware is used for the service.
> Dumb accidents? Well, all buildings can burn or have a power outage.
Data centers are built with redundant network connectivity, backup power, and fire suppression. Accidents can happen at both, but that’s not the question. The question is their relative frequency, which is where the data center is far superior.
The set of people who can maliciously modify it is the people who run f-droid, instead of the cloud provider and the people who run f-droid.
It'd be nice if we didn't have to trust the people who run f-droid, but given we do I see an argument that it's better for them to run the hardware so we only have to trust them and not someone else as well.
You actually do not have to trust the people who run F-Droid for those apps whose maintainers enroll in reproducible builds and multi-party signing, which F-Droid alone supports, unlike any of the alternatives.
That looks cool, which might just be the point of your comment, but I don't think it actually changes the argument here.
You still have to trust the app store to some extent. On first use, you're trusting F-Droid to give you the copy of the app with appropriate signatures. Running in someone else's data center still means you need to trust that data center plus the people setting up the app store, instead of just the app store. It's just that a breach of trust is less consequential, since the attacker needs to catch the first install (of apps that even use that technology).
F-Droid makes the most sense when shipped as the system app store, along with pinned CA keychains, as CalyxOS did. Ideally F-Droid would be compiled from source and validated by the ROM devs.
The F-droid app itself can then verify signatures from both third party developers and first party builds on an f-droid machine.
For all its faults (of which there are many) it is still a leaps-and-bounds better trust story than, say, Google Play. Developers can only publish code and optional signatures, not binaries.
Combine that with distributed reproducible builds with signed evidence validated by the app and you end up not having to trust anything but the f-droid app itself on your device.
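A toy sketch of that end state, where the client accepts an APK only if a quorum of independent builders attest to the same digest (the builder names and threshold are hypothetical, and signature verification over the attestations is elided):

```python
import hashlib

def quorum_ok(apk: bytes, attestations: dict, threshold: int) -> bool:
    """Accept the APK only if at least `threshold` builders attest to its digest.

    `attestations` maps builder name -> the SHA-256 hex digest that builder
    claims to have produced from source. A real client would also verify
    each builder's signature over its attestation, elided here.
    """
    digest = hashlib.sha256(apk).hexdigest()
    agreeing = sum(1 for d in attestations.values() if d == digest)
    return agreeing >= threshold

apk = b"reproducibly-built bytes"
good = hashlib.sha256(apk).hexdigest()
# Hypothetical independent builders; one disagrees (bad build or tampering).
attestations = {"builder-main": good, "verifier-a": good, "verifier-b": "deadbeef"}

print(quorum_ok(apk, attestations, threshold=2))  # 2 of 3 agree
```

With a 2-of-3 threshold, a single compromised builder (or a single compromised hosting location) cannot ship a malicious binary on its own.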
Modern machines go up to really mental levels of performance when you think about it, and for a lot of small-scale things like F-Droid I doubt it takes a lot of hardware to host it. A lot of it's going to be static files, so a basic web server could push through hundreds of thousands of requests and, even on a modest machine, saturate 10 Gbps, which I suspect is enough for what they do.
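Rough arithmetic behind that claim (file sizes and link speed are assumptions):

```python
# How many static-file requests/sec does it take to saturate a 10 Gbps link?
# Illustrative assumptions, not F-Droid's measured traffic.
link_bps = 10 * 10**9          # 10 Gbps uplink
avg_file_bytes = 25 * 10**6    # assume a ~25 MB APK download
avg_index_bytes = 50 * 10**3   # assume a ~50 KB index/metadata response

link_Bps = link_bps / 8        # bits/sec -> bytes/sec
rps_apks = link_Bps / avg_file_bytes
rps_index = link_Bps / avg_index_bytes
print(f"~{rps_apks:.0f} APK downloads/sec saturate the link")
print(f"~{rps_index:,.0f} index requests/sec saturate the link")
```

In other words, under these assumptions the network pipe, not the CPU, is the bottleneck long before a single modern box runs out of headroom serving static files.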
This just reads to me like they have racked a box in a colo with a known person running the shared rack, rather than someone’s basement, but who really knows; they aren't exactly handing out details.
> not hosted in just any data center [...] a long time contributor with a proven track record of securely hosting services
This is ambiguous, it could mean either a contributor's rack in a colocation centre or their home lab in their basement. I'd like to think they meant the former, but I can't deny I understood the latter in my first read.
I wonder if anyone knows about Droid-ify. Is it a safe option, or is it better to stay away from it?
It showed up one day while I searched for why F-Droid was always so extremely slow to update and download... then, trying Droid-ify, that was never a problem any more; it clearly had much better connectivity (or simply fewer users?)
> The previous server was 12 year old hardware
which is pretty mad. You can buy a second-hand system with tons of RAM and a 16-core Ryzen for like $400. 12-year-old hardware is only marginally faster than a RPi 5.
Plus the fact that it's been running for 5 years. Does that mean they bought 7 year old hardware back then? Or is that just when it was last restarted?
Building a budget AM4 system for roughly $500 would be within the realm of reason. ($150 mobo, $100 cpu, $150 RAM, that leaves $100 for storage, still likely need power and case.)
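Sanity-checking the commenter's ballpark line items:

```python
# Ballpark AM4 build budget from the comment above (commenter's figures).
parts = {"motherboard": 150, "cpu": 100, "ram": 150, "storage": 100}
total = sum(parts.values())
print(total)  # 500 -- PSU and case remain unbudgeted extras
```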
Good. But I wish postmarketOS supported more devices. On battery, tons of kernel patches could be set per device, plus a config package, to achieve the best settings. On software and security... you will find more malware in the Play Store than in the repos from pmOS/Alpine.
I know it's not a 100% libre (FSF) system, but that's a much greater step towards freedom than Android, where you don't even own your device.
It's frankly embarrassing how many of the comments on this thread are some version of looking at the XKCD "dependency" meme and deciding the best course of action is to throw spitballs at the maintainers of the critical project holding everything else up.
At the very least, it's reasonable to expect the maintainers of such a project to be open about their situation when it's that precarious. Why wouldn't you take every opportunity to let your users and downstream projects know that the dependency you're providing is operating with no redundancy and barely enough resources to carry on when things aren't breaking? Why wouldn't they want to share with a highly technical audience any details about how their infrastructure operates?
They're building all the software on a single server, and at best their fallback is a 12 year old server they might be able to put back in production. I'm not making any unreasonable assumptions, and they're not being forthcoming with any reassuring details.
I think all the criticism of what F-Droid is doing here (or perceived as doing) reflects more on the ones criticising than the ones being criticised.
How many things went upside down even when all the "right" things were done (corporate governance, cloud-native deployment, automation, etc.)? The truth is that none of these processes actually make things more secure on their own, and many projects went belly up despite following these kinds of recommendations.
That being said, I am grateful to F-Droid fighting the good fight. They are providing an invaluable service and I, for one, am even more grateful that they are doing it as uncompromisingly as possible (well into discomfort) according to their principles.
> Another important part of this story is where the server lives and how it is managed. F-Droid is not hosted in just any data center where commodity hardware is managed by some unknown staff.
> The previous server was 12 year old hardware and had been running for about five years. In infrastructure terms, that is a lifetime. It served F-Droid well, but it was reaching the point where speed and maintenance overhead were becoming a daily burden.
lol. if they're gonna use gitlab just use a proper setup - bigco is already in the critical path...
It has hosted quite a few famous services.
Personally I would feel better about round robin across multiple maintainer-home-hosted machines.
Regardless, the ongoing interest on $400K alone would be enough to pay colo fees.
IDK if they could bag this kind of grant every year, but isn't this the scale where cloud hosting starts to make sense?
Some of their points are valid but way too often they're unable to accept that different services aren't always trying to solve the same problem.
Also, even 12-year-old hardware is wicked fast.
> Also 4th amendment protections so no one gets access without me knowing about it
laughs in FISA
I agree that "behind someone's TV" would be a terrible idea.
Not reassuring.
State actor? Gets into data centre, or has to break into a privately owned apartment.
Criminal/3rd party state intelligence service? Could get into both, at a risk or with blackmail, threats, or violence.
Dumb accidents? Well, all buildings can burn or have a power outage.
As a year-long F-Droid user, I can't complain.
Also, no details on the hardware?
A Dell R620 is over 12 years old and WAY faster than a RPi 5 though...
Sure, it'll be way less power efficient, but I'd definitely trust it to serve more concurrent users than a RPi.
https://www.amazon.com/Timetec-Premium-PC4-19200-Unbuffered-...
https://www.amazon.com/MSI-MAG-B550-TOMAHAWK-Motherboard/dp/...
For a server that's replacing a 12 year old system, you don't need DDR5 and other bleeding edge hardware.
Saying this on HN, of course.
assumptions