> It brings the power of edge computing to your own infrastructure.
I like the idea of self-hosting, but it seems fairly strongly opposed to the concept of edge computing. The edge is only made possible by big ass vendors like Cloudflare. Your own infrastructure is very unlikely to have 300+ points of presence on the global web. You can replicate this with a heterogeneous fleet of smaller and more "ethical" vendors, but also with a lot more effort and downside risk.
For most applications 1 location is probably good enough. I assume HN is single-location, and I am a long way from CA but have no speed issues.
Caveat for high-scale sites and game servers. Maybe for image-heavy sites too (but self-hosting then adding a CDN seems like a low-lock-in and low-cost option).
Honestly, for my own stuff I only need one PoP to be close to my users. And I've avoided using Cloudflare because they're too far away.
More seriously, I think there's a distinction between "edge-style" and actual edge that's important here. Most of the services I've been involved in wouldn't benefit from any kind of edge placement: that's not the lowest hanging fruit for performance improvements. But that doesn't mean that the "workers" model wouldn't fit, and indeed I suspect that using a workers model would help folk architect their stuff in a form that is not only more performant, but also more amenable to edge placement.
> But do you need 300 PoPs to benefit from the edge model? Or would 10 PoPs in your primary territory be enough?
I don't think that the number of PoPs is the key factor. The key factor is being able to route requests based on edge-friendly criteria (latency, geographical proximity, etc.) and automatically deploy changes in a way that ensures consistency across the system.
This sort of project does not and cannot address those concerns.
Targeting the SDK and interface is a good hackathon exercise, but unless you want to put together a toy runtime to do some local testing, this sort of project completely misses the whole reason why this sort of technology is used.
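To make "edge-friendly routing criteria" concrete: before any worker runs, something in front of a self-hosted deployment has to pick a PoP per request. A minimal sketch of geography-based selection might look like the following; the PoP list, origins, and the assumption that a GeoIP lookup happens upstream are all illustrative, not anything OpenWorkers ships.

```typescript
// Illustrative sketch only: pick a PoP by the client's country, then forward
// the request there. PoP names, origins, and where the country code comes
// from are hypothetical.
type Pop = { name: string; countries: Set<string>; origin: string };

const pops: Pop[] = [
  { name: "eu-west", countries: new Set(["DE", "FR", "NL"]), origin: "https://eu.workers.example.com" },
  { name: "us-east", countries: new Set(["US", "CA"]), origin: "https://us.workers.example.com" },
];
const fallback = pops[0];

function pickPop(clientCountry: string | null): Pop {
  if (!clientCountry) return fallback;
  return pops.find((p) => p.countries.has(clientCountry)) ?? fallback;
}

async function route(request: Request, clientCountry: string | null): Promise<Response> {
  const pop = pickPop(clientCountry);
  const url = new URL(request.url);
  const target = new URL(url.pathname + url.search, pop.origin).toString();
  // Workers-style URL-rewrite idiom: re-issue the same request against the chosen PoP.
  return fetch(new Request(target, request));
}
```

The other half of the problem, making sure every PoP is running the same worker version after a deploy, is the part a hosted platform handles for you and the part that is genuinely hard to replicate yourself.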
I agree, latency is very important and 300 pops is great, but seems more for marketing and would see diminishing returns for the majority of applications.
The problem with sandboxing solutions is that they have to provide very solid guarantees that code can't escape the sandbox, which is really difficult to do.
Any time I'm evaluating a sandbox that's what I want to see: evidence that it's been robustly tested against all manner of potential attacks, accompanied by detailed documentation to help me understand how it protects against them.
This level of documentation is rare! I'm not sure I can point to an example that feels good to me.
So the next thing I look for is evidence that the solution is being used in production by a company large enough to have a dedicated security team maintaining it, and with real money on the line for if the system breaks.
Cloudflare needs to worry about their sandbox, because they are running your code and you might be malicious. You have less reason to worry: if you want to do something malicious to the box your worker code is running on, you already have access (because you're self-hosting) and don't need a sandbox escape.
Yes, exactly. The other reason the Cloudflare Workers runtime is secure is that they are incredibly active about keeping it patched and up to date with V8 main. It's often ahead of Chrome in adopting V8 releases.
I didn't know this, but there are also security downsides to being ahead of Chrome: Chrome releases take dependencies on "known good" V8 versions which have at least passed normal tests and minimal fuzzing, and V8 releases go through much more public review and fuzzing by the time they reach the Chrome stable channel. I expect that if you want to be as secure as possible, you'd want to stay aligned with "whatever V8 is in Chrome stable."
Fair point. The V8 isolate provides memory isolation, and we enforce CPU limits (100ms) and memory caps (128MB). Workers run in separate isolates, not separate processes, so it's similar to Cloudflare's model. That said, for truly untrusted third-party code, I'd recommend running the whole thing in a container/VM as an extra layer. The sandboxing is more about resource isolation than security-grade multi-tenancy.
I think you should consider adjusting the marketing to reflect this. "untrusted JavaScript" -> "JavaScript", "Secure sandboxing with CPU (100ms) and memory (128MB) limits per worker" -> "Sandboxing with CPU (100ms) and memory (128MB) limits per worker", overhauling https://openworkers.com/docs/architecture/security.
Overpromising on security hurts the credibility of the entire project - and the main use case for this project is probably executing trusted code in a self-hosted environment, not "execut[ing] untrusted code in a multi-tenant environment".
> Recently, with Claude's help, I rewrote everything on top of rusty_v8 directly.
worries me
I don't think what you want is even possible. What would such guarantees even look like? "Hello, we are a serious cybersec firm and we have evaluated the code and it's pretty sound, trust us!"?
"Hello, we are a serious cybersec firm and we have evaluated the code, and here are our tests with results proving that we didn't find anything; the code is sound. Have we been thorough? We have, trust us!"
In terms of a one off product without active support - the only thing I can really imagine is a significant use of formal methods to prove correctness of the entire runtime. Which is of course entirely impractical given the state of the technology today.
Realistically, security these days is an ongoing process, not a one-off. Compare to Cloudflare's security page: https://developers.cloudflare.com/workers/reference/security... (to be clear, when I use the pronoun "we" I'm paraphrasing and am not personally employed by Cloudflare or part of this at all)
- Implicit/from other pieces of marketing: We're a reputable company, and these other big reputable companies, who care about security and are juicy targets for attacks, use this product.
- We update V8 within 24 hours of a security update, compared to weeks for the big juicy target of Google Chrome.
- We use various additional sandboxing techniques on top of V8, including the complete lack of high precision timers, and various OS level sandboxing techniques.
- We detect code doing strange things and move it out of the multi-tenant environment into an isolated one, just in case.
- We detect code using APIs that increase the surface area (like debuggers) and move it out of the multi-tenant environment into an isolated one, just in case.
- We will keep investing in security going forwards.
Running secure multi-tenant environments is not an easy problem. It seems unlikely that it's possible for a typical open source project (typical in terms of limited staffing, usually including a complete lack of on-call staff) to release software to do so today.
Agreed. Cloudflare has dedicated security teams, 24h V8 patches, and years of hardening – I can't compete with that. The realistic use case for OpenWorkers is running your own code on your own infra, not multi-tenant SaaS. I will update the docs to reflect this.
Not if you're self-hosting and running your own trusted code, you don't. I care about resource isolation, not security isolation, between my own services.
Completely agree. There are some apps that unfortunately need to care about some level of security isolation, but with something like OpenWorkers they could just put those specific workers on their own isolated instance.
What if we hosted the cloud... on our own computers?
I see we have entered that phase in the ebb and flow of cloud vs. self-hosting. I'm seeing lots of echoes of this everywhere, epitomised by talks like this:
https://youtu.be/tWz4Eqh9USc
To me, the principal differentiator is the elasticity. I start and retire instances according to my needs, and only pay for the resources I've actually consumed. This is only possible on a very large shared pool of resources, where spikes of use even out somehow.
If I host everything myself, the cloud-like deployment tools simplify my life, but I still pay the full price for my rented / colocated server. This makes sense when my load is reasonably even and predictable. This also makes sense when it's my home NAS or media server anyway.
(It is similar to using a bus vs owning a van.)
> What if we hosted the cloud... on our own computers?
The value proposition of function-as-a-service offerings is not "cloud" buzzwords, but providing an event-handling framework where developers can focus on implementing event handlers that are triggered by specific events.
FaaS frameworks are the high-level counterpart of the low-level message brokers + web services/background tasks.
Once you include queues in the list of primitives, durable executions are another step in that direction.
If you have any experience developing and maintaining web services, you'll understand that API work consists largely of writing boilerplate code, controller actions, and background tasks. FaaS frameworks abstract away the boilerplate work.
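Concretely, the model being described is roughly this, in the Workers-style module syntax this thread is about. The scheduled handler mirrors Cloudflare's cron-triggered events; whether OpenWorkers exposes exactly the same shape is not something this sketch claims.

```typescript
// The developer only writes event handlers; the platform owns the HTTP
// server, the cron scheduler, retries, and deployment.
export default {
  // HTTP event: the "controller action".
  async fetch(request: Request): Promise<Response> {
    const { pathname } = new URL(request.url);
    if (pathname === "/health") return new Response("ok");
    return Response.json({ path: pathname, at: new Date().toISOString() });
  },

  // Cron event: the "background task", with no queue or worker plumbing to write.
  async scheduled(event: { cron: string }): Promise<void> {
    console.log(`cleanup triggered by cron "${event.cron}"`);
  },
};
```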
Forgive the uninformed questions, but given that `workerd` (https://github.com/cloudflare/workerd) is "open-source" (in terms of the runtime itself, less so the deployment model), is the main distinction here that OpenWorkers provides a complete environment? Any notable differences between the respective runtimes themselves? Is the intention to ever provide a managed offering for scalability/enterprise features, or primarily focus on enabling self-hosting for DIYers?
Thanks! Main differences:
1. Complete stack: workerd is just the runtime. OpenWorkers includes the full platform – dashboard, API, scheduler, logs, and self-hostable bindings (KV, S3/R2, Postgres; there's a sketch of how these look from worker code just after this list).
2. Runtime: workerd uses Cloudflare's C++ codebase, OpenWorkers is Rust + rusty_v8. Simpler, easier to hack on.
3. Managed offering: Yes, there's already one at dash.openworkers.com – free tier available. But self-hosting is a first-class citizen.
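For a sense of what the self-hostable bindings look like from worker code, here's a rough sketch. The binding name (CACHE) and the Env interface are made up for illustration; the get/put surface mirrors Cloudflare's KV API, which the project aims to stay compatible with.

```typescript
// Hypothetical worker using a self-hosted KV binding. Binding name and Env
// shape are illustrative; the API surface follows the Cloudflare KV style.
interface Env {
  CACHE: {
    get(key: string): Promise<string | null>;
    put(key: string, value: string, opts?: { expirationTtl?: number }): Promise<void>;
  };
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const key = new URL(request.url).pathname;

    const hit = await env.CACHE.get(key);
    if (hit !== null) return new Response(hit, { headers: { "x-cache": "hit" } });

    // Placeholder for the actual work the worker would do.
    const value = `computed for ${key} at ${new Date().toISOString()}`;
    await env.CACHE.put(key, value, { expirationTtl: 300 });
    return new Response(value, { headers: { "x-cache": "miss" } });
  },
};
```

Swapping the hosted KV for a self-hosted one then becomes a deployment concern rather than a code change, which is the lock-in point made elsewhere in the thread.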
Isn't the whole point of Cloudflare's Workers to pay per function? If it is self-hosted, you must dedicate hardware in advance, even if it's rented in the cloud.
Many companies that run self-hosted servers in data centers still need to run software on top of them. Not every company needs to pay people to do things they are capable of doing themselves.
Having options that mimic paid services is a good thing and helps with adoption.
To the author: The ASCII-art Architecture diagram is very broken, at least on my Pixel phone with Firefox.
These kinds of text-based diagrams are appealing to us techies, but in the end I learned that they are less practical. My suggestion is to use an image, and think of the text-based version as the "source code" which you keep, while what gets published is the output of "compiling" it into something that is always viewable without rendering mistakes (which is where ASCII art tends to fall down).
Cool. I always liked CF Workers but haven't shipped anything serious with them due to not wanting vendor lock-in. This is perfect for knowing you've always got an escape hatch.
Perhaps it might be helpful to some to also lay out the things that don't work today (or eg roadmap of what's being worked on that doesn't currently work?). Anyway, looks very cool!
Good idea! Main things not yet implemented: Durable Objects, WebSockets, HTMLRewriter, and cache API. Next priority is execution recording/replay for debugging. I'll add a roadmap section to the docs.
If I see anything that reduces reliance on vendor lock-in, I upvote it. Hopefully cloud services see a mass exodus so they have to have reasonable pricing that actually reflects their costs instead of charging more than free for basic services like NAT.
Cloud services are actually really nice and convenient if you were to ignore the eye watering cost versus DIY.
True, workerd is open source. But the bindings (KV, R2, D1, Queues, etc.) aren't – they're Cloudflare's proprietary services. OpenWorkers includes open source bindings you can self-host.
I tried to run it locally some time ago, but it's buggy as hell when self-hosted. It's not even worth trying out given that CF itself doesn't suggest it.
I'm worried that increasing RAM prices will drive more people away from local hosting and toward cloud services; if the big companies are buying up all the resources, it might not be feasible to self-host in a few years.
10% is the number I ordinarily see, accounting for members of staff and adequate DR systems.
If we had paid our IT teams half of what we pay a cloud provider, we would have had better internal processes.
Instead we starved them, and the cloud providers successfully weaponised extremely short-term thinking against us; now barely anyone has the competence to actually manifest those cost benefits without serious instability.
I genuinely mean that: fly.io (as unreliable as it might seem) is already around ~5x to 10x cheaper depending on use case, and for some services it's actually <infinity> times cheaper, because it's completely free when you self-host!
GCP pricing is absolutely wicked: they charge $120/month for 4 vCPUs and 16 GB RAM, when you can get around 23 times the performance and 192 GB RAM for $350/month, with X Tbps-ish DDoS protection.
I have two dual-7742 machines with 1 TB RAM each, three 9950X3Ds with 192 GB ECC, and two 7950X3Ds, all at <$600/month in running costs. Obviously the original hardware cost was in the realm of $60k, but the Epyc CPUs were bought used for around $1k each, so not a bad deal, same with the RAM; overall the true cost is <$20k. This is entirely for personal use and will most likely last me more than a decade, unless there are major gains in efficiency or power costs continue to grow due to AI demand. This also includes 100+ TB of HDD storage and 40 TB of NVMe storage, all connected with a pair of 100 Gbps switches for redundancy, for the cheap cheap price of $500 per switch.
I guess I owe some links: (Ignore minecraft focused branding)
https://pufferfish.host/ (also offers colocation)
telegram: @Erikb_9gigsofram direct colocation at datacenter (no middlemen / sales) + good low cost bundle deal
anti-ddos: https://cosmicguard.com/ (might still offer colocation?)
anti-ddos: https://tcpshield.com/
Wait, what? Can you show me some sources to back this up? I assume you are exaggerating, but still, it would be interesting to know what your definition of cheap is.
I don't think, after RAM prices spiked 4-5x, that it's going to be 100x cheaper to self-host. Hetzner's or OVH's cloud offerings are cheap, for example.
Plus you have to put down a lot of money up front, and then still pay for something like colocation if you are competing with them.
Even if you aren't, I think the models are different: cloud is a monthly subscription, whereas with hardware you have to purchase it outright.
It would be interesting, though, to compare hardware-as-a-service or similar offerings as well, but I don't know if I see them for individual use.
https://basecamp.com/cloud-exit
But they have scale. A small company will save less because it’s not that much more work to handle say a 100 node kubernetes cluster vs a 10 node kubernetes cluster.
A small company benefits more than anyone, since it's not rocket science to learn these things: you can just put on your system administrator hat once every few weeks. It would not be ideal to lose that employee, though, which is why I always suggest a couple of people pick up this very useful skill.
But I don't know much about how it is in a real-world, normal 9-to-5. I have taken up jobs from system administration to reverse engineering to even making plugins and infrastructure for Minecraft; these days I generally only work when people don't have any other choice and need someone who is pretty good at everything, so I am completely out of the loop.
Self-hosting nowadays is way, way, way easier than you're thinking. I'm involved with various political campaigns, and the first thing I help every team do is provision a 10-year-old laptop, flash Linux, and set up DDNS. A $100 investment is more than enough for a campaign of 10-20ish dedicated workers who will only be hitting this system one or two users at a time. If I can teach a random 70-year-old retiree or a 16-year-old how to type a dozen different commands, I'm sure a paid professional can learn too.
People need to realize that when you self-host you can choose to follow physical business constraints: if no one is in the office to turn on a computer, you're good. Also, consumer hardware is so powerful (even 10-year-old hardware) that it can easily handle 100k monthly active users, which is barely 3k daily users, and I doubt most SMBs actually need to handle anything beyond 500 concurrent users hardware-wise. So if that's the choice, it comes down to writing better and more performant software, which is what is lacking nowadays.
People don't realize how far modern tooling and hardware have come. You can get by with very little if you want.
I'd bet my year's salary that a good 40% of AWS customers could probably be fine with a single self-hosted server running basic plug-and-play FOSS software on consumer hardware.
People in our industry have been selling massive lies about the need for scalability; the number of companies that actually require such scalability is quite small. You don't need a rocket ship to walk 2 blocks, and it often feels like this is the case in our industry.
If self-hosting is "too scary" for your business, you can buy a $10 VPS, but after one single year you could probably find decade-old hardware that is faster than what you're paying for.
Yea, but admit that I am right that it is not that much harder to manage 100 nodes vs 10 nodes. (At least you can agree you don’t need 10x more staff to manage 100 nodes instead of 10)
That’s the key. If you need one person or 3 persons doesn’t matter. The point is the salaries are fixed costs.
You are right, but it's a feature of Kubernetes, actually. If you treat nodes as cattle, then it doesn't matter whether there are 10 or 100 or 1000, as long as the apiserver can survive the load and upgrades don't take too long (though upgrades/maintenance can be done slowly, over even days, without any problems).
But all the stateful crap (like databases) gets trickier and harder the more machines you have.
I'm in your camp but I go for the cheap VPS. Lightsail and DigitalOcean are amazing -- for $10/mo or less you get a cheap little box that's essentially everything you describe, but with all the peace of mind that comes from not worrying about physical security, physical backups, dynamic IPs/DDNS, and running out of storage space. You're right that almost nobody needs most of the stuff that AWS/GCP/Azure can do, but some things in the cloud are worth paying for.
> so they have to have reasonable pricing that actually reflects their costs instead of charging more than free for basic services like NAT
How is the cost of NAT free?
> Cloud services are actually really nice and convenient if you were to ignore the eye watering cost versus DIY.
I don't doubt clouds are expensive, but in many countries it'd cost more to DIY for a proper business. Running a service isn't just running the install command. Having a team to maintain and monitor services is already expensive.
Salesforce had their hosting bill jump orders of magnitude after ditching their colocation; it did not save anything, and colocation staff were replaced with AWS engineers.
NAT is free to provide because the infrastructure for it is already there and nothing ever maxes out a switch cluster (most switches sit at ~1% usage since they're overspecced $1,000,000 switches), so the only real cost is host CPU time spent managing interrupts (which is unlikely to matter since network cards offload this).
Sure, you could argue that regional NAT should perhaps be priced, but these companies have so much fiber between their datacenters that all NAT usage is probably a rounding error.
I think we’re in violent agreement, but you were ambiguous about what “cost” meant. It seems you meant “cost of providing NAT” but I interpreted it as “cost to the customer.”
Please read it again.
> Please read it again.
There's no need to be rude.
Good to see this! Cloudflare's cool, but those locked-in things (KV, D1, etc.) always made it hard to switch.
Offering open-source alternatives is always good, but maintaining them falls on the community. Even without super-secure multi-tenancy, being able to run the same code on your own stuff or a small VPS without changing the storage is a huge dev-experience boost.
edit: if the idea was to have compatibility with cloudflare workers, workers can run deno https://docs.deno.com/examples/cloudflare_workers_tutorial/
Deno core is great and I didn't really abandon Deno – we support 5 runtimes actually, and Deno is the second most advanced one (https://github.com/openworkers/openworkers-runtime-deno). It broke a few weeks ago when I added the new bindings system and I haven't had time to fix it yet. Focused on shipping bindings fast with the V8 runtime. Will get back to Deno support soon.
I'm quite ignorant on the topic (as I never saw the appeal of Cloudflare workers, not due to technical problems but solely because of centralization) but what does DX in "goal has always been the same: run JavaScript on your own servers, with the same DX as Cloudflare Workers but without vendor lock-in." mean? Looks like a runtime or environment but looking at https://github.com/drzo/workerd I also don't see it.
Anyway if the "DX" is a kind of runtime, in which actual contexts is it better than the incumbents, e.g. Node, or the newer ones e.g. Deno or Zig or even more broadly WASI?
DX means Developer Experience, they're saying it lets you use the same tooling and commands to build the workers as you would if they were on CloudFlare.
> Anyway if the "DX" is a kind of runtime, in which actual contexts is it better than the incumbents, e.g. Node, or the newer ones e.g. Deno or Zig or even more broadly WASI?
I'm not the blogger, I'm just a developer who works professionally with Cloudflare Workers. To me the main value proposition is avoiding vendor lock-in, and even so the logic doesn't seem to be there.
The main value proposition of Cloudflare Workers is being able to deploy workers at the edge and use them to implement edge use cases: custom cache logic, perhaps some authorization work, request transformation and aggregation, etc. If you remove the global edge network and cache, you do not have any compelling reason to look at this.
It's also perplexing how the sales pitch is Rust+WASM. This completely defeats the whole purpose of Cloudflare Workers. The whole point of using workers is to have very fast isolates handling IO-heavy workloads, where they sit idle the majority of the time so that the same isolate instance can handle a high volume of requests. WASM is notorious for eliminating the ability to yield on awaits from fetch calls, and is only compelling if your argument is a lift-and-shift use case. Which this ain't.
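For readers less familiar with Workers, the "edge use cases" mentioned above tend to look something like the sketch below: authorization, custom cache logic, and a small response tweak in front of the origin. It uses Cloudflare's Workers-style caches.default binding; the author notes elsewhere in the thread that OpenWorkers doesn't implement the cache API yet, so treat this purely as an illustration of the pattern.

```typescript
// Typical edge-worker pattern: reject early, serve from cache, otherwise
// fetch the origin, transform, and cache. Only worth much when the worker
// actually runs near the user.
declare const caches: { default: Cache }; // Workers-style global, declared so the sketch stands alone

export default {
  async fetch(request: Request): Promise<Response> {
    // 1. Cheap authorization before anything reaches the origin.
    if (!request.headers.get("authorization")) {
      return new Response("unauthorized", { status: 401 });
    }

    // 2. Custom cache logic keyed on the request.
    const cached = await caches.default.match(request);
    if (cached) return cached;

    // 3. Fetch from the origin, transform the response, and cache it.
    const origin = await fetch(request);
    const response = new Response(origin.body, origin);
    response.headers.set("x-served-by", "edge-worker");
    await caches.default.put(request, response.clone());
    return response;
  },
};
```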
Maybe it's better now but I wouldn't call this first-class support, as you rely on the JS runtime to initialize WASM.
The last time I tried it, the cold start was over 10 seconds, making it unusable for any practical use case. Maybe the tech is not there but given that WASM guarantees the sandboxing already and supports multiple languages, I was hoping we would have providers investing in it.
The problem is that there’s not much of a market opportunity yet. Customers aren’t voting for WASM with their wallets like they are mainstream language runtimes.
Cool project, but I never found the cloudflare DX desirable compared to self hosted alternatives. A plain old node server in a docker container was much easier to manage, use and is scalable. Cloudflare's system was just a hoop that you needed to jump through to get to the other nice to haves in their cloud.
(1) https://www.rivet.dev/docs/actors/
Recently really enjoying CloudFlare Workflows (used it in https://mafia-arena.com) and would be nice to build Workflows on top of this too.
Thanks! Workflows is definitely interesting – it's basically durable execution with steps and retries. It's on the radar, probably after the CLI and GitHub integration.
It's a custom V8 runtime built with rusty_v8, not the actual Cloudflare runtime (github.com/openworkers/openworkers-runtime-v8). The goal is API compatibility – same Worker syntax (fetch handler, Request/Response, etc.) so you can migrate code easily. Under the hood it's completely independent.
This is very nice! Do you plan to hook this up to GitHub, so that a push of worker code (and maybe a yaml describing the environment & resources) will result in a redeploy?
Not yet, but it's one of the next big features. I'm currently working on the CLI (WIP), and GitHub integration with auto-deploy on push will come after that. A yaml config for bindings/cron is definitely on the roadmap too.
I'm also working on execution recording/replay – the idea is to capture a deterministic trace of a request, so you can push it as a GitHub issue and replay it locally (or let an AI debug it).