fwiw, Bunny are the people that announced S3 compatibility for their object storage in Q2 2022 [1]
> We can’t wait to have this available as a preview later in Q2 and truly make global storage a breeze, so keep an eye out!
then apologised for missing that in September 2023 [2]
> We initially announced that we were working on S3 support for Bunny Storage all the way back in 2022. Today, as 2023 is slowly coming to an end, many of our customers continue to follow our blog, hoping for good news about the release.
changing the roadmap to early 2024 [2]
> But we are working aggressively toward shipping S3 compatibility in early 2024.
That same post also has the beautiful "At bunny.net, we value transparency." quote.
It's early 2026, and they're literally ignoring my support requests asking what the roadmap looks like for this now.
So, do not trust their product or leadership at all.
Yeah, I'm in the same boat. I was pretty excited to bring stuff over from Cloudflare, but the missing S3 compat and the communication around it were (and still are) a dealbreaker for me.
Asking because I was looking at both Cloudflare and Bunny literally this week...and I feel like I don't know anything about it. Googling for it, with "hackernews" as keyword to avoid all the blogspam, didn't bring up all that much.
(I ended up with Cloudflare and am sure that for my purposes it doesn't matter at all which I choose.)
- The free CDN is basically unusable with my ISP, Telekom Germany, due to a long-running and well-documented peering dispute. This is not necessarily an issue with Cloudflare itself, but it means I have to pay for the Pro plan for every domain if I want a functioning site in my home country. The $25 per domain/project adds up.
- Cloudflare recently had repeated, long outages that took down my projects for hours at a time.
- Their database offering (D1) had some unpredictable latency spikes that I never managed to fully track down.
- As a European, I'm trying to minimize the money I spend on US cloud services and am actively looking for European alternatives.
You don't have to get the Pro plan to solve the Deutsche Telekom issues. You can also use their Argo product for $5/month - but it only makes sense if your egress costs wouldn't exceed the Pro plan's pricing.
The reverse. Argo gives better peering than any paid plan. It's the reason for the product's existence. They can use more costly peering that they couldn't use with their free egress model.
Thanks for the pointer, not doubting that it's true. My egress is unfortunately too large for it to make financial sense.
However, at the time I did plenty of traceroutes to confirm that the Pro plan's peering is at least better than the Free plan's for the Telekom problem. The Free plan would route traffic to NYC and back, while Pro plan traffic terminates in Frankfurt.
> This feature is currently in the closed beta stage. It is not available for use currently, but it's expected to be in the near future. We appreciate your interest in it and will mark your ticket so we can notify you when it's available.
You left out the part where they realized they couldn't ship S3 compatibility without rebuilding their storage service. So they have decided to rebuild their storage service. Not exactly a small project. So I can see how it's taking longer. At least they were transparent about it.
I've been struggling with Bunny the last couple of days.
Their log delivery API is delayed by over 3 days, despite them promising only "up to 5 minutes delay" in their docs: https://docs.bunny.net/cdn/logging
Why isn't it on the status page, you might ask? Oh, that's because a delay is not "critical". But I fear I am losing log lines now - their retention is only 3 days.
It's an interesting strategy for them, because it doesn't inspire confidence in me about their other offerings. When they can't reliably operate a log delivery API or be transparent about issues, it's hard to trust them with something as critical as a database.
I can't take this seriously. You've got the backend of some networked application and now you want to offload the database to a SaaSS which has... "SQLite compatibility"?
SQLite is an embedded database. Every Android app for instance gets a db through sqlite for free. You can easily compile it yourself and embed it in pretty much anything that has a CPU.
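To illustrate the "embedded" point: the whole database engine lives inside your process, with no server and no network hop. A minimal Python sketch using the stdlib module:

```python
import sqlite3

# The engine runs in-process; ":memory:" keeps the database in RAM,
# while a filename would persist it as a single file on disk.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))
rows = conn.execute("SELECT id, name FROM users").fetchall()
print(rows)  # → [(1, 'alice')]
```

Every query here is a function call into linked-in library code, which is exactly what makes bolting a network stack onto it feel backwards to this commenter.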
So you'd like to offload this embeddable database that's nowhere near Postgres or even MariaDB, and then it even says:
> Why care about database latency anyway?
Right. Because you can't even begin to think about introducing a network stack between your application and the database unless you don't care about the latency at all.
> Maybe I'm not the target market for this, but how hard is it REALLY to manage a RDBMS?
It depends:
- do you want multi region presence
- do you want snapshot backups
- do you want automated replication
- do you want transparent failover
- do you want load balancing of queries
- do you want online schema migrations with millisecond lock time
- do you want easy reverts in time
- do you want minor versions automatically managed
- do you want the auth integrated with a different existing system
- do you want...
There's a lot that hosted services with extra features can give you. You can do everything on the list yourself of course, but it will take time and unless you already have experience, every point can introduce some failure you're not aware of.
> There's a lot that hosted services with extra features can give you.
I totally agree with that, but in my experience 99% of "application developers" don't need all these features. Of those you listed, I only see "backups" as a requirement. Everything else is just - what I said - features for when your application is successful and you want something streamlined.
I would have no concerns around reliability/uptime running my own database.
I would have concerns around backups (ensuring that your backups are actually working, secure, and reliable seems like potentially time intensive ongoing work).
I also don't think I fully understand what is required in terms of security. Do I now have to keep track of CVEs, and work out what actions I need to take in response to each one? You talk about firewall rules. I don't know what is required here either.
I'm sure it's not too hard to hire someone who does know how to do these things, but probably not for anything close to the $50/month or whatever it costs to run a hosted database.
Backups are a PITA. I wanted to go exactly this route, but even though I had VMs and compute, I couldn't let any production data hit them without bulletproof backups.
I set up a cron job to store my backups in object storage, but everything felt very fragile, because if any detail in the chain was misconfigured I'd basically have a broken production database. I'd have to watch the database constantly or set up alerts and notifications.
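The "verify every link in the chain" part the commenter worries about can be made explicit. A hedged sketch, assuming Postgres and `pg_dump` (the upload step and alerting are deliberately left out; paths are made up):

```python
import hashlib
import subprocess
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Checksum used to verify the uploaded copy matches the local dump."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def backup(db_url: str, dump_path: Path) -> str:
    """Dump the database and return a checksum to re-verify after upload."""
    # check=True makes a failed pg_dump raise instead of silently leaving
    # a truncated file behind, which is the failure mode described above.
    subprocess.run(
        ["pg_dump", "--format=custom", "--file", str(dump_path), db_url],
        check=True,
    )
    if dump_path.stat().st_size == 0:
        raise RuntimeError("empty dump - refusing to upload")
    return sha256_of(dump_path)
```

The real safety net is periodically restoring a dump into a scratch instance; a checksum only proves the bytes survived, not that the dump is restorable.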
If there were a ready-to-go OSS Postgres with backups configured that you could deploy, I'd happily pay for that.
It's not about it being hard, it's about delegating. Many companies are a bit less sensitive to pricing and would rather pay monthly for someone else to keep their database up, rather than spending engineering hours on setting up a database, tuning it, updating it, checking its backups, monitoring it and making it scale if needed.
Sure, any regular SME can just install Postgres or MySQL without setting much up beyond `mysql_secure_installation`, a user with a password, and an 'app' database. But you may end up with 10-20 database installs you need to back up, patch, and so on every once in a while. And companies value that.
On the pricing bit, I have to say edge-driven SQLite/libSQL solutions (this describes a lot of them) can be a mixed bag.
Cloudflare, Fly.io's Litestream offerings, and Turso are pretty reasonably priced, given the global coverage.
AWS with Aurora is more expensive for sure, and isn't edge-located if I recall correctly, so you don't get near-instant propagation of changes at the edge.
The bigger thing for me is how much control you have. So far with these edge database providers you don’t have a ton of say in how things are structured. To use them optimally, I have found it works best if you are doing database-per-tenant (or customer) scenarios or using it as a read / write cache that gets exfiltrated asynchronously.
And that flexibility, I believe, is where the real cost factors come into play.
Or at least they should. I’ve worked many places where thousands of dollars in engineering hours were wasted on something after they refused to use a service for a fraction of the cost. Some companies understand this but others don’t.
The vast majority of products with paying customers need better availability than “database went down on Friday and I was AFK until Monday, sorry for the 3 day downtime everyone”
The "Wait, what does “SQLite-compatible” actually mean?" subheading didn't answer my question to be honest. They're using (forked) libSQL under the hood - ok, cool. But how do I interface with it?
While in public preview, Bunny Database is free.
When idle, Bunny Database only incurs storage costs. One primary region is charged continuously, while read replicas only add storage costs when serving traffic (metered by the hour).
Reads - $0.30 per billion rows
Writes - $0.30 per million rows
Storage - $0.10 per GB per active region (monthly)
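Taking the listed rates at face value, a quick back-of-the-envelope estimator (the workload numbers and region count in the example are made up for illustration):

```python
# Rates quoted in the pricing above.
READ_PER_BILLION = 0.30       # $ per billion rows read
WRITE_PER_MILLION = 0.30      # $ per million rows written
STORAGE_PER_GB_REGION = 0.10  # $ per GB per active region, monthly

def monthly_cost(reads, writes, gb, active_regions):
    return (
        reads / 1e9 * READ_PER_BILLION
        + writes / 1e6 * WRITE_PER_MILLION
        + gb * STORAGE_PER_GB_REGION * active_regions
    )

# e.g. 2B reads, 10M writes, 5 GB replicated to 3 active regions:
print(round(monthly_cost(2e9, 10e6, 5, 3), 2))  # → 5.1
```

Note the asymmetry: reads are priced per *billion* rows and writes per *million*, so write-heavy workloads dominate the bill long before reads do.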
The best thing about their pricing is that you can prepay. So if you have a runaway cost, it can stop before you run up a 5 or 6 figure bill, unlike Azure/AWS/GCP/CF.
Adding my voice to the chorus here: they've established a pattern of introducing new features and never really getting them past the 80% point. No qualms with the CDN; it's a sweet spot among providers. But their other offerings have been frustrating me for years now.
The pattern here is clear: everyone loves the CDN, everyone worries about their track record shipping anything else. A database is the worst place to test whether that changes. CDN mistakes cost you cache misses. Database mistakes cost you data. The pricing model and EU story are genuinely compelling, but if the S3 compatibility timeline is any guide, I'd let this one run quietly for a year before trusting it with anything I can't afford to lose.
It does feel like they're spreading their resources pretty thin though, the S3-compatible interface for their file storage has been "coming soon" since 2022.
S3 is currently in closed preview with some users. It's quite easy to get added for those keen to try it. The more people using it and providing feedback, the quicker it'll reach public preview.
Huh, how? Did you have to modify your site a lot to do the switch?
I tried to test it out as a CDN replacement for Cloudflare but the workflow was a lot different. Instead of just using DNS to put it in front of another website and proxy the requests (the "orange cloud" button), I had to upload all the assets to Bunny and then rewrite the URLs in my app. Was kind of a pain
When I tried it last year, their edge compute infra was just not there yet. It could not do any meaningful server-side rendering because of code size, compute and JS standard constraints.
Depending on your precise requirements, I think it might have changed.
I've been trying out Bunny recently and it looks like a very viable replacement for most things I currently do with Cloudflare. This new database fills one of the major gaps.
Their edge scripting is based on Deno, and I think is pretty comparable to e.g. Vercel. They also have "magic containers", comparable to AWS ECS but (I think) much more convenient. It sounds from the docs like they run containers close to the edge, but I don't know if it's comparable to e.g. Lambda@Edge.
I haven’t tried to do SSR in bunny but they also have bunny magic containers now where you run an entire container instead of just edge scripts (but still at the edge).
I have been using them for over a year. They have the same flow as Cloudflare: point your domain to their CDN, set the CDN Pull Zone to target your server. I haven't had to do anything else.
They even support websockets.
What they can't do is the Tunnel stuff, or at least fake it. I have IPv6 servers, and I can't have the IPv4 Bunny traffic go to the IPv6-only sources.
Will give this a spin. They’re one of the few cloud-y providers that has both prepayment and a rate limiter that doesn’t charge for rate limit exceeds (still blows my mind that providers charge for blocks).
Same, it's nice to use a no-BS CDN for personal projects (e.g. https://atlasof.space/). Their pricing is good and I actually appreciate that they have no free tier so that there's no "oh shit" moment when you suddenly exceed it and owe real $$$ (looking at you, Netlify). I probably won't use their database feature but I'll for sure keep using their CDN if they can keep things as straightforward as they currently are.
I wonder if they're stretching themselves too thin? Their CDN product is rock solid IME and so is their video streaming, but they've been adding a lot more "developer-platform" type products, seemingly trying to catch up to CF, and I'm not sure I'll ever trust it enough in terms of staying-power to ever commit to the vendor lock-in there. (I wouldn't with Cloudflare either, to be fair)
Reminds me of how we got scarred by "parse.com" -- it was also a promising database, and our customer insisted on it, but after lengthy development and just before our project release it turned out that they were shutting down and no one worked on it anymore. Like, literally, their support said "uhm sorry folks, we're all hired by Facebook, no one is working on parse.com anymore".
parse.com was my last straw building on "as a service" startups because of this. DaaS is not even particularly good for hobby projects anymore given how easy it is to work with sqlite.
This documentation page[1] seems pretty clear: one primary at a time; any number of read replicas that automatically proxy writes to the primary; and when compute scales to zero, the data lives in object storage and a new primary can spin up elsewhere.
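The model described there is essentially the classic read/write-splitting pattern. A toy client-side sketch of the routing logic (names and structure are illustrative, not Bunny's actual API):

```python
class ReplicatedDB:
    """Reads go to a nearby replica; writes are forwarded to the single
    writable primary, mirroring the replicas-proxy-writes behavior."""

    WRITE_PREFIXES = ("INSERT", "UPDATE", "DELETE", "CREATE", "DROP", "ALTER")

    def __init__(self, primary, replicas):
        self.primary = primary
        self.replicas = replicas or [primary]

    def route(self, sql, region=0):
        """Return the node that would handle this statement."""
        if sql.lstrip().upper().startswith(self.WRITE_PREFIXES):
            return self.primary  # every replica proxies writes here
        return self.replicas[region % len(self.replicas)]

db = ReplicatedDB("primary-eu", ["replica-us", "replica-sg"])
print(db.route("SELECT * FROM t", region=1))  # → replica-sg
print(db.route("INSERT INTO t VALUES (1)"))   # → primary-eu
```

Prefix-sniffing SQL like this is only a demo shorthand; a real proxy would parse the statement (or let the engine report write attempts) rather than guess from keywords.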
Small companies often have much better technical support than large companies where you just get lost in the system. One of the reasons I moved away from R2 was that it was impossible to contact anyone about the serious issues I had with the product. I’m using Bunny for CDN and have found them to be very responsive.
Not a technical reason, but Cloudflare's recent business practices, where they hold you hostage if you don't upgrade to an enterprise plan, are a pretty good reason to avoid them imo.
Some ISPs have bad peering with Cloudflare (e.g. Deutsche Telekom). Not Cloudflare's fault, but it makes it a bad choice if your customers are in Germany.
> Not every project needs Postgres, and that’s okay. Sometimes you just want a simple, reliable database that you can spin up quickly and build on, without worrying it’ll hit your wallet like an EC2.
Isn't the operational burden of SQLite the main selling point over Postgres (not one I subscribe to, but that's neither here nor there)? If it's managed, why do I care if it's SQLite or Postgres? If anything, I would expect Postgres to be the friendlier option, since you won't have to worry about eventually discovering that you actually need some feature even if you don't need it at the start of your project. Maybe there are projects that implement SQLite on top of Postgres so you can gradually migrate away from SQLite if you need Postgres features eventually?
Marek here from bunny.net. We’re not saying SQLite is universally better than Postgres. The trade-off we’re optimizing for is cost model and operational simplicity.
Even as a managed service, Postgres DBaaS still tends to push users into capacity planning, instance tiers, and paying for idle headroom. Using a SQLite-compatible engine lets us offer a truly usage-based model with affordable read replication and minimal idle costs.
Turso deprecated their edge replica offering when they went all-in on rewriting SQLite, so Bunny fills a nice void here. While Turso didn't pull the plug on existing customers, new customers were left looking for alternatives with edge replicas (that couldn't replicate on-device), and Bunny does a nice job here.
Turso also made it so your primary had to be in one location, but Bunny will move it around based on latency on the next boot.
Some European companies migrate their dependencies from US clouds to European ones. Turso is registered in Delaware. Bunny HQ is in Slovenia. Different data related policies apply.
[1] https://bunny.net/blog/introducing-edge-storage-sftp-support... [2] https://bunny.net/blog/whats-happening-with-s3-compatibility...
> When S3 compatibility is enabled (currently in beta), the number of available replication points is reduced
I assume it's a private beta.
https://docs.bunny.net/storage/storage-tiers#s3-compatibilit...
Hopefully it will be fixed soon.
I can't imagine myself a customer for this.
Any Linux distro can have MySQL or Postgres installed in less than five minutes, and it works out of the box
Even a single core VPS can handle lots of queries per second (assuming the tables are indexed properly and the queries aren't trash)
There are mature open source backup solutions which don't require DB downtime (also available in most package managers)
It's trivial to tune a DB using .conf files (there are even scripts that autotune for you!!!)
Your VPS provider will allow you to configure encryption at rest, firewall rules, and whole disk snapshots as well
And neither MySQL or Postgres ever seem to go down, they're super reliable and stable
Plus you have very stable costs each month
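The "indexed properly" caveat above is doing a lot of work, and it's cheap to check. The same idea applies to MySQL and Postgres (`EXPLAIN`), but here's a self-contained demo using stdlib SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")

def plan(sql):
    # EXPLAIN QUERY PLAN reports how SQLite will execute the statement.
    return " ".join(row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT * FROM users WHERE email = 'a@example.com'"
before = plan(query)  # reports a full table scan ("SCAN ...")
conn.execute("CREATE INDEX idx_email ON users (email)")
after = plan(query)   # reports an index search via idx_email
print(before)
print(after)
```

The exact wording of the plan varies between SQLite versions, but the scan-vs-index-search distinction is what turns "lots of queries per second on a single core" from a hope into a fact.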
It is not. You can provision a free Postgres instance with a single click: https://neon.new/
And tell me how easily you can achieve this "out of the box"
If you don't care about business continuity or high availability then everything gets easier
> And neither MySQL or Postgres ever seem to go down, they're super reliable and stable
The box they're on goes down
So? Not everyone needs 99.999999% availability.
Serverless, managed databases and even multicloud won't save you. You'll still have to be on call.
Don't want to be on call? Design your stuff so it works local first.
What is the upgrade path?
How often do they release?
Do I have to worry about CVEs?
Who is doing network security?
Who is testing that security?
Where are my credentials stored?
Do I have a dashboard that tracks the hundreds of resources I'm responsible for including this new one?
> Plus you have very stable costs each month
I'm sick and tired of managing linux boxes. It simply doesn't scale in any reasonable way.
I've been had :(
They don't elaborate, but apparently libSQL has an HTTP API called "Hrana": https://github.com/tursodatabase/libsql/blob/main/docs/HRANA... - if that's what they're exposing, wouldn't it make more sense to call it libSQL-compatible or something?
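For a sense of what "speaking Hrana" looks like: Hrana over HTTP (per the v2 spec in the libSQL repo) wraps SQL statements in a JSON "pipeline". A sketch of the request body; whether Bunny exposes exactly this endpoint is an assumption:

```python
import json

def hrana_pipeline(*statements):
    # Each SQL statement becomes an "execute" request; a trailing "close"
    # ends the stream. Shape follows the Hrana-over-HTTP v2 document.
    requests = [{"type": "execute", "stmt": {"sql": sql}} for sql in statements]
    requests.append({"type": "close"})
    return json.dumps({"requests": requests})

# The spec describes POSTing this body to <base-url>/v2/pipeline with an
# Authorization: Bearer <token> header (endpoint/auth are assumptions here).
body = hrana_pipeline("CREATE TABLE t (x)", "SELECT 1")
print(body)
```

If that's what their HTTP API speaks, the commenter's point stands: "libSQL-compatible" would be the more precise label than "SQLite-compatible".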
I was testing IPv6 origin support (they don’t support it), and they billed me $2 for a couple of test requests. I was testing at the end of the month.
With other providers, this would have cost only a few cents.
It's a similar process to Cloudflare. Point the NS to them and enable the proxy for a domain or subdomain.
(don't use CNAME flattening with DNS-routed CDNs like Bunny though; if you must use an apex domain, use the CDN's integrated nameservers)
What is the problem with doing that?
Has this situation changed?
Bunny has a similar concept: https://bunny.net/edge-scripting/
It's European rather than from the USA, so it's less dependent on that orange guy in that white/golden house.
Just compare the most recent commits from LibSQL: https://github.com/tursodatabase/libsql/commits/main/
To those of SQLite: https://sqlite.org/src/timeline
One of these looks like a healthy and actively maintained project. The other isn't quite dead, but it's limping along.
https://turso.tech/blog/we-will-rewrite-sqlite-and-we-are-go...
If not, it seems like it would be quite a bit of work to implement the synchronization... and I don't understand why one would use it otherwise.
[1]: https://docs.bunny.net/database/replication
It looks like there might be issues in Italy too.
In addition to the other points brought up, it looks like pricing strongly favors Bunny once you're outside of Cloudflare's free tier.
Per billion rows read: Bunny $0.30, Cloudflare $1.00 (first 25B/month free)
Per million rows written: Bunny $0.30, Cloudflare $1.00 (first 50M/month free)
Per GB stored: Bunny $0.10/region, Cloudflare $0.75 (5GB free)
Bunny also has a lot better region selection, 41 available vs. Cloudflare's 6 (see https://developers.cloudflare.com/d1/configuration/data-loca...). Even though Bunny charges storage per region used where Cloudflare doesn't, Bunny still comes out cheaper with 7 regions selected. Bunny lets you choose how many and which regions to replicate across; Cloudflare's region replication is an on/off toggle that is in beta and requires you to use "the new Sessions API" (I don't know what this entails).
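Plugging the rates above into a quick comparison (free tiers included on the Cloudflare side; the workload figures are made up for illustration):

```python
def bunny(reads, writes, gb, regions):
    # Rates quoted above: $0.30/billion reads, $0.30/million writes,
    # $0.10/GB per active region.
    return reads / 1e9 * 0.30 + writes / 1e6 * 0.30 + gb * 0.10 * regions

def cloudflare(reads, writes, gb):
    # $1.00/billion reads after 25B free, $1.00/million writes after
    # 50M free, $0.75/GB after 5 GB free.
    return (
        max(reads - 25e9, 0) / 1e9 * 1.00
        + max(writes - 50e6, 0) / 1e6 * 1.00
        + max(gb - 5, 0) * 0.75
    )

# Hypothetical workload: 100B reads, 500M writes, 50 GB; Bunny in 7 regions.
print(round(bunny(100e9, 500e6, 50, 7), 2))    # → 215.0
print(round(cloudflare(100e9, 500e6, 50), 2))  # → 558.75
```

Even paying storage seven times over, Bunny comes out well ahead at this scale; below the free-tier thresholds the comparison flips, since Cloudflare charges nothing at all.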
The main reason I haven't tried out D1 is that it locks you into using Workers to access the database. Bunny says they have an HTTP API.
I plan to stick with VPSes for compute and storage, but I do like seeing someone (other than Amazon) challenge Cloudflare on their huge array of fun toys for devs to play with.
And Cloudflare is an American company.
Nah, who am I kidding. I left.
(Context: <https://xkcd.com/1871/>.)
Also, not sure about now, but historically Turso didn't have the best uptime.