66 comments

  • hackingonempty 4 hours ago
    > The enterprise mindset dictates that you need an out-of-process database server. But the truth is, a local SQLite file communicating over the C-interface or memory is orders of magnitude faster than making a TCP network hop to a remote Postgres server.

    I don't want to diss SQLite because it is awesome and more than adequate for many/most web apps but you can connect to Postgres (or any DB really) on localhost over a Unix domain socket and avoid nearly all of the overhead.

    It's not much harder to use than SQLite, you get all of the Postgres features, it's easier to run reports or whatever on the live db from a different box, and it's much easier if it comes time to set up a read replica, HA, or run the DB on a different box from the app.

    I don't think running Postgres on the same box as your app is the same class of optimistic over provisioning as setting up a kubernetes cluster.

    • andersmurphy 2 hours ago
      Sqlite smokes postgres on the same machine even with domain sockets [1]. And that's before you get into using multiple sqlite databases.

      What features does postgres offer over sqlite in the context of running on a single machine with a monolithic app? Application functions [2] mean you can extend it however you need with the same language you use to build your application. It also has a much better backup and replication story thanks to litestream [3].

      - [1] https://andersmurphy.com/2025/12/02/100000-tps-over-a-billio...

      - [2] https://sqlite.org/appfunc.html

      - [3] https://litestream.io/
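
      As a sketch of what those application functions look like in practice (using Python's stdlib `sqlite3` module; `slugify` here is a made-up example):

```python
import sqlite3

def slugify(s):
    # ordinary application code, written in the app's own language
    return s.lower().replace(" ", "-")

conn = sqlite3.connect(":memory:")
conn.create_function("slugify", 1, slugify)  # now callable from SQL
result = conn.execute("SELECT slugify('Hello World')").fetchone()[0]
print(result)  # hello-world
```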

      The main problem with sqlite is that the defaults are not great, and you should really use it with separate read and write connections, where the application manages the write queue rather than letting sqlite handle it.
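
      A minimal sketch of that separate-writer pattern in Python's stdlib `sqlite3` (the file name and schema are made up for illustration):

```python
import queue
import sqlite3
import threading

def writer_loop(db_path, q):
    # the single write connection; all writes funnel through this thread
    conn = sqlite3.connect(db_path)
    conn.execute("PRAGMA journal_mode=WAL")
    conn.execute("PRAGMA busy_timeout=5000")
    conn.execute("DROP TABLE IF EXISTS events")  # fresh table for the demo
    conn.execute("CREATE TABLE events(msg TEXT)")
    while True:
        item = q.get()
        if item is None:  # sentinel: shut the writer down
            break
        conn.execute("INSERT INTO events VALUES (?)", (item,))
        conn.commit()
    conn.close()

writes = queue.Queue()
t = threading.Thread(target=writer_loop, args=("events.db", writes))
t.start()
for msg in ["a", "b", "c"]:
    writes.put(msg)  # any thread can enqueue; only one thread touches the DB
writes.put(None)
t.join()

# reads can use as many separate connections as you like under WAL
count = sqlite3.connect("events.db").execute(
    "SELECT COUNT(*) FROM events").fetchone()[0]
print(count)  # 3
```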

      • locknitpicker 1 hour ago
        > Sqlite smokes postgres on the same machine even with domain sockets [1].

        SQLite on the same machine is akin to calling fwrite. That's fine. This is also a system constraint as it forces a one-database-per-instance design, with no data shared across nodes. This is fine if you're putting together a site for your neighborhood's mom and pop shop, but once you need to handle a request baseline beyond a few hundred TPS and you need to serve traffic beyond your local region then you have no alternative other than to have more than one instance of your service running in parallel. You can continue to shoehorn your one-database-per-service pattern onto the design, but you're now compelled to find "clever" strategies to sync state across nodes.

        Those who know better than to do "clever" simply slap in a Postgres node and call it a day.

        • andersmurphy 1 hour ago
          > SQLite on the same machine is akin to calling fwrite.

          Actually 35% faster than fwrite [1].

          > This is also a system constraint as it forces a one-database-per-instance design

          You can scale incredibly far on a single node and have much better uptime than GitHub or Anthropic. At this rate, maybe even AWS/Cloudflare.

          > you need to serve traffic beyond your local region

          Postgres still has a single node that can write, so most of the time you end up region sharding anyway. Sharding SQLite is straightforward.

          > This is fine if you're putting together a site for your neighborhood's mom and pop shop, but once you need to handle a request baseline beyond a few hundred TPS

          It's actually pretty good for running a real-time multiplayer app with a billion datapoints on a $5 VPS [2]. There's nothing clever going on here; all the state is on the server and the backend is fast.

          > but you're now compelled to find "clever" strategies to sync state across nodes.

          That's the neat part: you don't. For most things that are not uplink-limited (being a CDN, Netflix, Dropbox), a single node is all you need.

          - [1] https://sqlite.org/fasterthanfs.html

          - [2] https://checkboxes.andersmurphy.com

        • rpdillon 1 hour ago
          I wonder what percentage of services run on the Internet exceed a few hundred transactions per second.
          • icedchai 24 minutes ago
            I’ve seen multimillion dollar “enterprise” projects get nowhere close to that. Of course, they all run on scalable, cloud-native infrastructure costing at least a few grand a month.
          • egwor 1 hour ago
            I think the better question to ask is what services peak at a few hundred transactions per second?
    • eurleif 3 hours ago
      Looks like the overhead is not insignificant:

          Running 100,000 `SELECT 1` queries:
          PostgreSQL (localhost): 2.77 seconds
          SQLite (in-memory): 0.07 seconds
      
      (https://gist.github.com/leifkb/1ad16a741fd061216f074aedf1eca...)
      • piker 3 hours ago
        I love them both too but that might not be the best metric unless you’re planning to run lots of little read queries. If you’re doing CRUD, simulating that workflow may favor Postgres given the transactional read/write work that needs to take place across multiple concurrent connections.
        • locknitpicker 1 hour ago
          > I love them both too but that might not be the best metric unless you’re planning to run lots of little read queries.

          Exactly. Back in the real world, anyone faced with that sort of use case will simply add a memory cache and not bother with the persistence layer.

      • bob1029 3 hours ago
        This is mostly about thread communication. With SQLite you can guarantee no context switching. Postgres running on the same box gets you close but not all the way. It's still in a different process.
        • andersmurphy 43 minutes ago
          This. Run an app on the same box as PG and you can easily be plagued by out-of-memory errors etc. (as there's memory contention between the two processes).
      • madduci 2 hours ago
        Most important is that the local SQLite gets proper backups, so a restore goes without issues.
      • vixalien 2 hours ago
        Would be nice to see PGLite[1] compared too

        1: https://pglite.dev/

      • locknitpicker 2 hours ago
        A total performance delta of <3s on ~300k transactions is indeed the definition of irrelevant.

        Also:

        > PostgreSQL (localhost): (...) SQLite (in-memory):

        This is a rather silly example. What do you expect to happen to your data when your node restarts?

        Your example makes as much sense as comparing Valkey with Postgres and proceeding to proclaim that the performance difference is not insignificant.

      • iLoveOncall 2 hours ago
        Why are you comparing PostgreSQL to an in-memory SQLite instead of a file-based one? Wow, memory is faster than disk, who would have thought?
        • eurleif 2 hours ago
          Because it doesn't make a difference, because `SELECT 1` doesn't need to touch the database:

              Running 100,000 `SELECT 1` queries:
              PostgreSQL (localhost): 2.71 seconds
              SQLite (in-memory): 0.07 seconds
              SQLite (tempfile): 0.07 seconds
          
          (https://gist.github.com/leifkb/d8778422d450d9a3f103ed43258cc...)
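
          The SQLite half of that benchmark is reproducible with just the stdlib; a rough sketch (timings vary by machine, and the Postgres side would need a client library such as psycopg pointed at localhost):

```python
import sqlite3
import time

N = 100_000
cur = sqlite3.connect(":memory:").cursor()

t0 = time.perf_counter()
for _ in range(N):
    cur.execute("SELECT 1").fetchone()
sqlite_secs = time.perf_counter() - t0
print(f"SQLite (in-memory): {sqlite_secs:.2f} seconds")

# The Postgres side would run the same loop through a client library
# (e.g. psycopg) against localhost; each iteration then pays an
# inter-process round trip instead of an in-process C call.
```
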
          • oldsecondhand 2 hours ago
            Why are you doing meaningless microbenchmarks?
          • locknitpicker 1 hour ago
            > Because it doesn't make a difference, because `SELECT 1` doesn't need to touch the database:

            I hope you understand that your claim boils down to stating that SQLite is faster at doing nothing at all, which is a silly case to make.

            • eurleif 1 hour ago
              The original claim being discussed is about the overhead of an in-process database vs. a database server in a separate process, not about whether SQLite or PostgreSQL have a faster database engine.
      • stavros 1 hour ago
        It is insignificant if you're doing 100k queries per day, and you gain a lot for your 3 extra seconds a day.
      • Izmaki 2 hours ago
        What a useful "my hello-world script is faster than your hello-world script" example.
    • usernametaken29 2 hours ago
      I have used SQLite with extensions in extreme throughput scenarios. We’re talking about running millions of documents per second through it in order to do disambiguation. I won’t say this wouldn’t have been possible with a remote server, but it would have been a significant technical challenge. Instead we packed up the database on S3, and each instance got a fresh copy and hammered away at the task. SQLite is the time-tested alternative for when you need performance, not features.
    • jampekka 3 hours ago
      > It's not much harder to use than SQLite, you get all of the Postgres features, it's easier to run reports or whatever on the live db from a different box, and it's much easier if it comes time to set up a read replica, HA, or run the DB on a different box from the app.

      Isn't this idea to spend a bit more effort and overhead to get YAGNI features exactly what TFA argues against?

    • jbverschoor 2 hours ago
      I've been doing that for decades.. People seem to simply not know about unix architecture.

      What I like about sqlite is that it's simply one file

    • weego 1 hour ago
      That's just swapping another enterprise-focused concern into the mix. Your database connection latency is absolutely not a concerning part of your system.
    • dizhn 3 hours ago
      Author's own 'auth' project works with sqlite and postgres.
    • Jolter 3 hours ago
      I mean, you’re not wrong about the facts, but it’s also pretty trivial to migrate the data from SQLite into a separate Postgres server later, if it turns out you do need those features after all. But most of the time, you don’t.
      • pdhborges 3 hours ago
        I bet that takes more time than the 5 extra minutes you take to setup Postgres in the same box upfront.
    • lichenwarp 1 hour ago
      ORDERS OF MAGNITUDE NEWS
    • direwolf20 1 hour ago
      IIRC TCP/IP through localhost actually benchmarked faster than Unix sockets because it was optimized harder. Might've been fixed by now. Unix sockets give you the advantage of authentication based on the user ID of who's connecting.

      My experience with sqlite for server-based apps has been that as your app grows, you almost always eventually need something bigger than sqlite and need to migrate anyway. For a server-based app, where minimizing deployment complexity isn't an extremely important concern, and with mixed reads and writes, it's rarely a bad idea to use Postgres or MariaDB from the start. Yes there are niche scenarios where sqlite on the server might be better, but they're niche.

  • senko 3 hours ago
    If this sounds like basic advice, consider there are a lot of people out there that believe they have to start with serverless, kubernetes, fleets of servers, planet-scale databases, multi-zone high-availability setups, and many other "best practices".

    Saying "you can just run things on a cheap VPS" sounds amateurish: people are immediately out with "Yeah but scaling", "Yeah but high availability", "Yeah but backups", "Yeah but now you have to maintain it" arguments that are basically regurgitated sales pitches for various cloud platforms. It's learned helplessness.

    • kandros 25 minutes ago
      “Cloud-native natives” had so many free plans that they never needed to understand what a basic app really needs.
    • ramraj07 2 hours ago
      Apparently the phrase cargo cult software engineering is not common anymore. Explains these things perfectly.
      • rcbdev 2 hours ago
        Sooner or later (often sooner) I end up explaining this term to every junior developer who doesn't know it, the same way I explain bike-shedding to all PMs who don't know it.

        It seems to really help if you can put a term to it.

      • throwatdem12311 11 minutes ago
        Heh, I was gonna say cargo cult might mean something different in today’s programming landscape, but then I thought about it for a second and it actually reinforces the meaning.
    • InfraScaler 2 hours ago
      I don't know what to say. People keep saying these engineers exist, and here I am not having seen a single one, and I follow many indie hacker communities.
      • dwedge 1 hour ago
        A devops coworker found my blog and asked me how I host it, is it Kubernetes. I told him it's a dedicated server and he seemed amazed. And this was just a blog. It's real
        • InfraScaler 1 hour ago
          Does your coworker run a blog on k8s?
          • dwedge 1 hour ago
            None of them self host anything at all. It's like that skill was totally skipped. But they advise and consult on infra
            • Hnrobert42 49 minutes ago
              Well, by the time you are hiring a dedicated infra role, you should be past the single VPS stage.
              • MontyCarloHall 8 minutes ago
                Hard disagree. You can have incredibly complex infrastructure (whose maintenance necessitates an FTE) running on a single VPS. Some of the smartest infra consultants I've met have point blank said "you shouldn't implement a fleet of cloud microservices for that, I can implement this whole stack on a single machine if I'm efficient about the architecture and underlying algorithms."
              • dwedge 33 minutes ago
                My point is that none of these coworkers have ever been at that stage. He was surprised about me hosting something because he seems to think hosting is expensive and for companies. Straight in at the top end of k8s and microservices
      • Dumbledumb 1 hour ago
        I think that's precisely because the indie hacker community is not as keen to default to the big-tech stacks, since those are neither indie, nor hack-y :)
  • KronisLV 3 hours ago
    > I use Linode or DigitalOcean. Pay no more than $5 to $10 a month. 1GB of RAM sounds terrifying to modern web developers, but it is plenty if you know what you are doing.

    If you get one dedicated server for multiple separate projects, you can still keep the costs down but relax those constraints.

    For example, look at the Hetzner server auction: https://www.hetzner.com/sb/

    I pay about 40 EUR a month for this:

      Disk: 736G / 7.3T (11%)
      CPU: Intel Core i7-7700 @ 8x 4.2GHz [42.0°C]
      RAM: 18004MiB / 64088MiB
    
    I put Proxmox on it and can have as many VMs as the IO pressure of the OSes will permit: https://www.proxmox.com/en/ (I cared mostly about storage so got HDDs in RAID 0, others might just get a server with SSDs)

    You could have 15 VMs each with 4 GB of RAM and it would still come out to around 2.66 EUR per month per VM. It's just way more cost-efficient at any sort of scale (number of projects) when compared to regular VPSes, and as long as you don't put any trash on it, Proxmox itself is fairly stable, its being a single point of failure aside.

    Of course, with refurbished gear you'd want backups, but you really need those anyways.

    Aside from that, Hetzner and Contabo (opinions vary about that one though) are going to be more affordable even when it comes to regular VPS hosting. I think Scaleway also had those small Stardust instances if you want something really cheap, but they go out of stock pretty quickly as well.

    • compounding_it 1 hour ago
      What do you do about IPv4? Do you also use a routing VM to manage all that?

      It’s very interesting how people rent large machines and run a hypervisor on them. I’m wondering if VPS licenses have any clauses preventing this at commercial scale.

      • KronisLV 20 minutes ago
        Hetzner has some docs: https://docs.hetzner.com/robot/dedicated-server/ip/additiona...

        Since I only needed about 3 VMs (though each being a bit beefier, running containers on them, a web server sitting in front of those with vhosts as ingress), I could give each VM its own IPv4 address and it didn’t end up being too expensive for my use case. Would be a bit different for someone who wants many small VMs.

  • f311a 3 hours ago
    There are zero reasons to limit yourself to 1GB of RAM. By paying $20 instead of $5 you can get at least 8GB of RAM. You can use it for caches or a database that supports concurrent writes. The $15 difference won’t make any financial difference if you are trying to run a small business.

    Thinking about how to fit everything on a $5 VPS does not help your business.

    • jampekka 3 hours ago
      $15 is not exactly zero, is it? If you don't need more than 1GB, why pay anything for more than 1GB?

      I recall running LAMP stacks on something like 128MB about 20 years ago and not really having problems with memory. Most current website backends are not really much more complicated than they were back then if you don't haul in bloat.

      • elAhmo 1 hour ago
        Saving 15 USD on 10k+ USD MRR is ridiculous.
        • compounding_it 1 hour ago
          Given how much revenue depends on the experience of a web app and loading times, I’d be happy to pay 100$ a month on that revenue if I don’t have to sacrifice a second of additional loading time no matter how clever I was optimizing it.
        • cbdevidal 1 hour ago
          Saving 15 USD on 0 USD MRR while still building the business is priceless. Virtually infinite runway.
      • bdelmas 3 hours ago
        It is. With 10k MRR it represents 0.15% of the revenue. Having the whole backend costing that much for a company selling web apps is like it’s costing zero.
        • jvuygbbkuurx 1 hour ago
          You probably don't make 10k MRR on day one. If you make many small apps, it can make sense to learn how to run things lean to have 4x longer runway per app.
      • kaliqt 3 hours ago
        There’s a happy medium and $5 for 1GB RAM just isn’t it.
        • cbdevidal 1 hour ago
          Be sure to inform the author of the article who is currently making money on his 1GB VPS that he hasn’t found a happy medium
        • lijok 2 hours ago
          Not a very strong argument now is it?
          • pas 2 hours ago
            if the project already has positive revenue then arguably the ability to capture new users is worth a lot, which requires acceptable performance even when a big traffic surge is happening (like a HN hug of attention)

            if the scalability is in the number of "zero cost" projects to start, then 5 vs 15 is a 3x factor.

    • 100ms 2 hours ago
      NVMe read latency is around 100 µs, and a SQLite3 database in the low terabytes needs somewhere between 3-5 random IOs per point lookup, so for an already meaningful amount of data you're talking worst case about 0.5 ms per cold lookup. Say your app is complex and makes 10 of these per request: 5 ms. That leaves you serving 200 requests/sec before ever needing any kind of cache.

      That's 17 million hits per day in about 3.9 MiB/sec sustained disk IO, before factoring in the parallelism that almost any bargain bucket NVME drive already offers (allowing you to at least 4x these numbers). But already you're talking about quadrupling the infrastructure spend before serving a single request, which is the entire point of the article.
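
      The arithmetic above, as a quick sanity check (the bandwidth figure depends on how many pages per lookup you count as cold; here only one 4 KiB page per lookup, on the assumption that interior B-tree pages stay cached, which lands near the ~8 MiB/s correction below rather than 3.9):

```python
io_latency = 100e-6     # ~100 us per random NVMe read
ios_per_lookup = 5      # worst case of the 3-5 range above
lookups_per_req = 10

req_latency = ios_per_lookup * io_latency * lookups_per_req  # ~5 ms
req_per_sec = 1 / req_latency                                # ~200
hits_per_day = req_per_sec * 86_400                          # ~17.3 million

# bandwidth, counting one cold 4 KiB page per lookup
mib_per_sec = req_per_sec * lookups_per_req * 4096 / 2**20   # ~7.8
print(round(req_per_sec), round(hits_per_day), round(mib_per_sec, 1))
```
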

      • 100ms 27 minutes ago
        Rereading this, I have no idea where 3.9 MiB/sec came from, that 200 requests/sec would be closer to 8 MiB/sec
      • f311a 2 hours ago
        You won't get such numbers on a $5 VPS; the SSDs used there are network-attached and shared between users.
        • 100ms 1 hour ago
          Not quite $5, but a $6.71 Hetzner VPS

              # ioping -R /dev/sda
          
              --- /dev/sda (block device 38.1 GiB) ioping statistics ---
              22.7 k requests completed in 2.96 s, 88.8 MiB read, 7.68 k iops, 30.0 MiB/s
              generated 22.7 k requests in 3.00 s, 88.8 MiB, 7.58 k iops, 29.6 MiB/s
              min/avg/max/mdev = 72.2 us / 130.2 us / 2.53 ms / 75.6 us
    • nlitened 1 hour ago
      > There are zero reasons to limit yourself to 1GB of RAM

      There is a good reason: teaching yourself not to over-engineer, over-provision, or overthink, and instead to focus on generating business value for customers and getting more paying customers. I think it’s something many engineers are keen to overlook amid fun technical details.

      • locknitpicker 1 hour ago
        > There is a good reason: teaching yourself not to over-engineer, over-provision, or overthink, (...)

        This is specious reasoning. You don't prevent anything by adding artificial constraints. To put things in perspective, Hetzner's cheapest vCPU plan comes with 4GB of RAM.

        • sgarland 13 minutes ago
          If I give you a box with 1 GiB of RAM, you are literally forced to either optimize your code to run in it, or accept the slowdown from paging. How is this specious?
    • AussieWog93 2 hours ago
      Or better yet, go with a euro provider like Hetzner and get 8GB of RAM for $10 or so. :)

      Even their $5 plan gives 4GB.

      • arcanemachiner 1 hour ago
        They also have servers in the US (east and west coast).
    • littlecranky67 2 hours ago
      I think we have to re-think and re-evaluate RAM usage on modern systems that use swapping with CPU-assisted page compression and fast, modern NVMe drives.

      The Macbook Neo with 8GB RAM is a showcase of how people underestimated its capabilities due to the low amount of RAM before launch, yet after release all the reviewers pointed to a larger set of capabilities, without the issues people predicted pre-launch.

      • f311a 2 hours ago
        $5 VPS disks are nowhere near Macbooks'; they are shared between users and often network-attached. They don't sit close to the CPU.
      • sgt 2 hours ago
        Also, macOS is generally exceptional at caching and making efficient use of the fast solid state chips.
    • afro88 3 hours ago
      It doesn't look like they think about how to make it fit, though. They just use a known-good Go template.
    • TiredOfLife 2 hours ago
      Hetzner, OVH and others offer 4-8GB and 2-4 cores for the same ~$5.
  • gobdovan 4 hours ago
    Nice list! I'd say the SQLite with WAL is the biggest money saver mentioned.

    One note: you can absolutely use Python or Node just as well as Go. Hetzner offers machines with 4GB RAM, 2 CPUs, and 10TB of traffic (then $1/TB egress) for $5.

    Two disclaimers for VPS:

    If you're using a dedicated server instead of a cloud server, just don't forget to back up the DB to a Storage Box often ($3/mo for 1TB; use rsync). It's good practice either way, but cloud instances seem more resilient to hardware faults. Also avoid their object store.
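
    For the DB part, SQLite's online backup API gives you a consistent snapshot to rsync without stopping writes; a sketch using Python's stdlib `sqlite3` (paths and the rsync target are placeholders):

```python
import sqlite3

src = sqlite3.connect("app.db")  # placeholder path for the live DB
src.execute("CREATE TABLE IF NOT EXISTS t(x)")
src.execute("INSERT INTO t VALUES (1)")
src.commit()

# online backup: consistent snapshot even while other writes continue
dst = sqlite3.connect("app-backup.db")
src.backup(dst)
rows = dst.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(rows)

# then ship the snapshot to the storage box, e.g.:
#   rsync -az app-backup.db u123456@u123456.your-storagebox.de:backups/
```
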

    You are responsible for security. I've seen good devs skip basic SSH hardening and get infected by bots in under an hour. My go-to move when I spin up servers is a two-stage Terraform setup: first I set up SSH with only my IP allowed, then I set up Tailscale and shut down the public SSH entrypoint completely.

    Take care and have fun!

    • t_mahmood 3 hours ago
      About security, a wall-of-shame story:

      Once I had a PostgreSQL db with the default password on a new VPS, having forgotten to disable password-based login, on a server with no domain. It got hacked within a day and was being used as a bot server. And that was 10 years ago.

      I recently deployed a server and was getting SSH login attempts within an hour, and it didn't have a domain. Fortunately, I'd learned my lesson and turned off password-based login as soon as the server was up and running.

      Similar attempts once bogged my desktop down to a halt.

      Having a machine open to the world is now very scary. Thank God services like Tailscale exist.

      • dwedge 1 hour ago
        Nothing would happen; SSH is designed to be open to the world. Using Tailscale or a VPN to hide your IP is fine, but using Tailscale SSH, maybe not.
    • selcuka 4 hours ago
      > Nice list! I'd say the SQLite with WAL is the biggest money saver mentioned.

      Funny you said that. I migrated an old Django web site to a slightly more modern architecture (docker compose with uvicorn instead of bare-metal uWSGI) the other day, and while doing that I noticed that it doesn't need PostgreSQL at all. The old server had it already installed, so it was the lazy choice.

      I just dumped all data and loaded it into an SQLite database with WAL and it's much easier to maintain and back up now.

      • gobdovan 3 hours ago
        Yep, it literally is a one-file backup. And at runtime it's so much faster for apps where write serialisation is acceptable.
    • dwedge 1 hour ago
      I need more info about devs getting infected over SSH in less than an hour. Unless they had a comically weak root password or left VNC open, I don't believe it at all.
    • egwor 1 hour ago
      First step is to get ssh setup correctly, and second step is to enable a firewall to block incoming connections on everything except the key ports (ssh but on a different port/web/ssl). This immediately eliminates a swathe of issues!
    • InfraScaler 2 hours ago
      Does WAL really offer multiple concurrent writers? I know little about DBs, and I've done a couple of Google searches: people say it allows concurrent reads while a write is happening, but not concurrent writers?

      Not everybody says so... So, can anyone explain what's the right way to think about WAL?

      • gobdovan 1 hour ago
        No, it does not allow concurrent writes (with some exceptions if you get into it [0]). You should generally use it only if write serialisation is acceptable. Reads and writes are concurrent except for the commit stage of writes, which SQLite tries to keep short but is workload- and storage-dependent.

        Now this is more controversial take and you should always benchmark on your own traffic projections, but:

        consider that if you don't have a ton of indexes, the raw throughput of SQLite is so good that for many access patterns you'd already have to shard a Postgres instance anyway by the point SQLite's single-writer limitation became the bottleneck.

        [0] https://www.sqlite.org/src/doc/begin-concurrent/doc/begin_co...
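
        The single-writer/concurrent-reader behaviour is easy to see from Python's stdlib `sqlite3` (the file name is a made-up example):

```python
import sqlite3

w = sqlite3.connect("demo.db")  # WAL needs a real file, not :memory:
w.execute("PRAGMA journal_mode=WAL")
w.execute("DROP TABLE IF EXISTS t")  # fresh table for the demo
w.execute("CREATE TABLE t(x)")

w.execute("BEGIN IMMEDIATE")  # take the single write lock
w.execute("INSERT INTO t VALUES (1)")

r = sqlite3.connect("demo.db")  # a reader, while the write is still open
before = r.execute("SELECT COUNT(*) FROM t").fetchone()[0]  # 0: old snapshot
w.commit()
after = r.execute("SELECT COUNT(*) FROM t").fetchone()[0]   # 1: sees the commit
print(before, after)  # 0 1
```

A second connection trying to `BEGIN IMMEDIATE` here would wait (up to `busy_timeout`) rather than proceed, which is the single-writer limitation in action.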

      • pixelesque 1 hour ago
        No it doesn't - it allows a single writer and concurrent READs at the same time.
        • InfraScaler 1 hour ago
          Thanks! Even I run SQLite in "production" (is it production if you have no visitors?) with WAL mode enabled, but I had to work around concurrent writes, so I was really confused. I may have misunderstood the comments.
          • yomismoaqui 1 hour ago
            Writes are super fast in SQLite even if they are not concurrent.

            If you were seeing errors due to concurrent writes, you should adjust busy_timeout.

    • asymmetric 2 hours ago
      > Also avoid their object store.

      Curious as to why you say this. I’m using litestream to backup to Hetzner object storage, and it’s been working well so far.

      I guess it’s probably more expensive than just a storage box?

      Not sure but I also don’t have to set up cron jobs and the like.

      • gobdovan 1 hour ago
        Historical reliability and compatibility. They claimed they were S3-compatible, but they required deprecated S3 SDKs, and advanced S3 features are unimplemented (though at least they document it [0]). There were constant timeouts for object creation and updates, very slow speeds, and overall instability. Even now, if you check out r/hetzner on Reddit, you'll see it's a reliability nightmare (but take it with a grain of salt; nobody reports a lack of problems). Not as relevant for DB backups, but billing is dumb: even if you upload a 1KB file, they charge you for 64KB.

        At least with Storage Box you know it's just a dumb storage box. And you can SSH, SFTP, Samba and rsync to it reliably.

        [0] https://docs.hetzner.com/storage/object-storage/supported-ac...

    • nurgalive 48 minutes ago
      When creating a VPS on Hetzner, it lets you configure key-based auth only by default.
  • t_mahmood 3 hours ago
    SQLite is fine, but I have run PostgreSQL on a $20 server without any issues, and I would suggest that if you have to deal with concurrent users and tasks, PostgreSQL is the way to go. SQLite WAL works, but it sometimes caused issues when I had a lot of concurrent tasks running continuously.

    And, I'm not sure I'm correct, but I felt PostgreSQL has more optimized storage than SQLite for large text data. At least for me, I filled up storage with SQLite, but the same application on PostgreSQL never had this issue.

  • p4bl0 4 hours ago
    Just in case there are others like me who were wondering what "MRR" means: it seems to be "monthly recurring revenue".
    • balgg 3 hours ago
      There is also ARR which is "annual recurring revenue" and you should know that when people use ARR they usually are just making up numbers based on their current MRR (so lying). I've seen people announce their ARR after running their business for two whole months!
      • jwr 1 hour ago
        That's not really "lying" — ARR is usually understood as your projected "Annual Run Rate". It's a useful metric, as long as it is understood that it is an estimate.

        But, in all honesty, all RR numbers are estimates. MRR is also a "made up number" from a certain point of view: it is not equivalent to cash received every month, because of annual subscriptions, cancelations, etc.

        • balgg 26 minutes ago
          >But, in all honesty, all RR numbers are estimates.

          Sure, but I would expect you to have at least one data point, or at least be near it, before making any estimates on that timescale. I don't see many people make MRR projections based on 2 days of sales; it's just something I've noticed with startups and ARR.

      • rpdillon 1 hour ago
        Rather than lying, I think of it more as financial dead reckoning.
    • weird-eye-issue 4 hours ago
      [flagged]
      • blitzar 3 hours ago
        Obviously they are lacking the sigma hustle grindset.

        It's like not having syphilis or cancer; it's a good thing.

        • jofzar 2 hours ago
          They haven't also worked at a company where the meetings have MRR said like every 4 seconds. I'm so jealous of them
        • weird-eye-issue 3 hours ago
          Says the guy with almost 5k HN comments in less than 5 years
          • blitzar 2 hours ago
            I try to limit it to just 1-2 comments after my 4am ice facial and then no more than 4 comments while I am having my 3pm youth blood infusion.

            Consistency is key for the grindset.

            • weird-eye-issue 2 hours ago
              [flagged]
              • blitzar 2 hours ago
                Nothing some Ayahuasca and a trip to Joshua tree (make sure you do it in that order) wont fix.
                • weird-eye-issue 1 hour ago
                  In what order do you recommend I fit that into my Burning Man itinerary?
      • chii 4 hours ago
        Not everybody who reads HN is well versed in business/entrepreneur-oriented jargon.
        • vasco 4 hours ago
          HN means HackerNews btw, for those 15 year accounts that don't know the jargon
        • weird-eye-issue 4 hours ago
          Yes. Clearly. But is the irony really lost on you?
      • p4bl0 3 hours ago
        Haha ^^'.

        Honestly, yes. I'm on HN for tech content, I don't really care about startups and the business side of things, even though sometimes there are interesting reads on this side as well. Also, it may very well be the case that I rediscover the meaning of MRR for the second or third time in sixteen years :).

        • jofzar 2 hours ago
          I'm jealous of you, like, seriously. You somehow haven't worked at a company where a C-suite says MRR like every 5th sentence in a meeting.
      • toong 4 hours ago
        I was about to say: welcome to HN
  • vxsz 4 hours ago
    I learned nothing. Most of this seems like common basic advice, wrapped up in AI written paragraphs...

    Initially from the title, I thought it would be about brainstorming and launching a successful idea, and that sort of thing.

    • gobdovan 4 hours ago
      Usually when there's "on a [low] $/mo" you'll hear basic advice. You'd be surprised to find out many folks are not aware of this!
      • senko 2 hours ago
        Well, there's also the "How we saved $10M/mo by actually paying attention to indexes" trope.
    • mettamage 3 hours ago
      If you feel like it: start a blog! You have knowledge that you consider basic and a certain other subset of the population is interested in it and doesn't know it exists.
    • anana_ 2 hours ago
      > Sometimes you need the absolute cutting-edge reasoning of Claude 3.5 Sonnet or GPT-4o

      Dead giveaway

      • senko 2 hours ago
        Maybe it's tongue-in-cheek.
        • anana_ 1 hour ago
          Upon rereading, I'd agree. Fits with the tone of the rest of the write up.
    • carabiner 4 hours ago
I think it's good. I've definitely seen exactly the resource inflation OP is alluding to in enterprise: a desire for some huge cloud-based solution with AWS, Spark, bla bla, when a Python script with pandas in a cron job was faster.
    • Aerolfos 3 hours ago
      Not only that, his whole business model seems to be "profit off the AI bubble and get the big techs to indirectly subsidize you"

      Which obviously works, it's not like there aren't tons of multi-million startups ultimately doing the exact same thing, and yet. It feels a bit... trite?

  • yoaso 43 minutes ago
    I'm taking the opposite approach - managed services all the way, and my monthly infrastructure costs are higher than what's described here.

    No regrets. Infrastructure isn't the problem I'm trying to solve. The problem is: who's actually going to pay for this?

    Optimizing infrastructure before you have customers is like designing a kitchen before you've written the menu. I launched within 72 hours of starting development and went straight to customer validation. The market feedback started coming in immediately.

    Infrastructure costs show up in your bill. The cost of slow customer validation doesn't show up anywhere - until it's too late. That's the number I watch.

    • em-bee 2 minutes ago
      which approach works better depends on your financial situation and your existing setup. if you have money you can invest, then your approach works. if you have more time than money then invest the time instead. when you have built up your servers over the years, when building a new product, you can also do it quickly because the services you need are already running, and firing up a new database or a new server takes just as long as it takes to set up a managed service. but it doesn't add any cost.
    • sgarland 7 minutes ago
      It doesn’t sound like OP was optimizing anything; it sounds like they just knew how to use that stack, and so are able to get customer validation while also spending very little per month.
  • aleda145 4 hours ago
    Great stack! I'm doing a similar approach for my latest project (kavla.dev) but using fly.io and their suspend feature.

    Scaling to zero with database persistence using litestream has cut my bill down to $0.1 per month for my backend+database.

    Granted I still don't have that many users, and they get 200ms of extra latency if the backend needs to wake up. But it's nice to never have to worry about accidental costs!

    • afro88 3 hours ago
      This is a really nice setup for side projects and random ideas too. Thanks for sharing!
  • pdimitar 34 minutes ago
I do appreciate the technical simplicity argument and I'm always advocating for it. And the few neat tricks, e.g. the Copilot one.

    That being said, I'd much rather read a few ideas for good recurring passive income. Instead, the author kind of flexes on that, then says "I get refused VC money because they don't see how their money would be useful for me" -- which is one more flex -- and moves on to the technical bits.

    It's coming across as bragging to me.

  • cmiles8 17 minutes ago
The biggest risk to cloud revenues is that everyone wakes up and realizes they could slash their cloud bills by 60+% quite quickly with just some minimal learning.
  • wg0 1 hour ago
Anyone doing per-tenant databases with SQLite + Litestream? Please share your experiences and pain points. I know migrations are one. The other challenge is locating the correct database from the incoming request. What else?
  • brkn 3 hours ago
    The text feels incoherent to me and lacks some nuance.

It starts with cutting costs through the choice of infrastructure, then moves on to less resource-hungry tools and cheaper services, but never compares the costs of these things. Do I actually save the upgrade to a bigger server by using Go and SQLite over, let's say, Python and Postgres? Or does it not even matter when you only have n users? And I don't understand why at one point the convenience of using OpenRouter is preferred over managing multiple API keys, when the latter should be cheaper and is a cost point that could grow faster than your infrastructure costs.

    There are some more points, but I do not want to write a long comment.

    • stephbook 2 hours ago
      It actually starts with a completely unrelated anecdote:

      "What do you even need funding for?"

      I agree. The author claims to have multiple $10K MRR websites running on $20 costs. I also don't understand what he needs money for — shouldn't the $x0,000 be able to fund the $20 for the next project? It doesn't make any sense at all.

      Then the author trails off and tells us how he runs on $20/month.

      Well, why did you apply for funding? Hello?

      • chiefalchemist 1 hour ago
        Just because you start this lean doesn’t mean you should stay that way. Perhaps he’s now spending too much time managing his stack and not enough time on product development, customer service, a/o growth.

        In other words, what gets you to $10k MRR isn’t the same thing(s) for 2x, 5x, or 10x that.

  • taffydavid 1 hour ago
    > I bought a GitHub Copilot subscription in 2023, plugged it into standard VS Code, and never left. I tried Cursor and the other fancy forks when they briefly surpassed it with agentic coding, but Copilot Chat always catches up.

    > Here is the trick that you might have missed: somehow, Microsoft is able to charge per request, not per token. And a "request" is simply what I type into the chat box. Even if the agent spends the next 30 minutes chewing through my entire codebase, mapping dependencies, and changing hundreds of files, I still pay roughly $0.04.

    > The optimal strategy is simple: write brutally detailed prompts with strict success criteria (which is best practice anyway), tell the agent to "keep going until all errors are fixed," hit enter, and go make a coffee while Satya Nadella subsidizes your compute costs.

    Wow. I'll definitely be investigating this!

    • estetlinus 1 hour ago
The author refers to GPT-4o and Sonnet 3.5 as SOTA. I'd take the AI tips with a grain of salt, tbh. But I'd love it if it's true.
    • taffydavid 1 hour ago
      Thanks for the downvote kind stranger. Not sure what I said to qualify
  • ianpurton 4 hours ago
When he switches from Kubernetes in the cloud to Nginx -> App Binary -> SQLite, he trades operations functionality for cost.

    But, actually you can run Kubernetes and Postgres etc on a VPS.

    See https://stack-cli.com/ where you can specify a Supabase style infra on a low cost VPS on top of K3s.

    • Jolter 3 hours ago
      I think his argument is that the functionality is unnecessary. You don’t need dynamic service scaling because your single-instance service has such high capacity to begin with.

      I guess it’s all about knowing when to re-engineer the solution for scale. And the answer is rarely ”up front”.

      • ianpurton 2 hours ago
Dynamic scaling is not really even available on single-node Kubernetes.

I was thinking more of: running multiple websites (one application per namespace), tooling (k9s for looking at logs, etc.), upgrading applications, and so on.

        • sgarland 3 minutes ago
          Namespaces exist in Linux [0], they weren’t invented by K8s.

          You can view application logs with anything that can read a text file, or journalctl if your distro is using that.

          There are many methods of performing application upgrades with minimal downtime.

          0: https://www.man7.org/linux/man-pages/man7/namespaces.7.html

  • Gooblebrai 2 hours ago
I know this article is about the stack, but I'd like to point out that the author's success probably has more to do with their marketing/sales strategy than with their choice of technical infrastructure.

Something to remind many tech folks on HN of.

    • chiefalchemist 2 hours ago
      True. But he’s able to do marketing because he has the money, time and sense of priorities to do so.

      The moral of the story is: Don’t be (another) fool, your tech stack is not your priority.

  • nullorempty 5 minutes ago
    Eh-trade.ca eh? The name spells the exit strategy this is seeking. Awesome idea and a great execution. Vertical scaling will take this simple setup far and probably far enough.
  • thibaultmol 4 hours ago
Pretty sure this is just written by AI... Why else would someone call "Claude 3.5 Sonnet and GPT-4o" high-end models?
    • edu 3 hours ago
Yep. It made me go check the date of publishing, thinking it was published in 2023.
    • gverrilla 23 minutes ago
      are they not high-end?
  • jmward01 3 hours ago
    The basic premise, try to be lean, is a good one. The implementation will clearly be debated with everyone having their own opinion on it but the core point is sound. I'd argue a different version of this though: keeping things lean forces simplicity and focus which is incredibly important early on. I have stepped into several startups and seen a mess of old/broken/I don't know what it does so leave it/etc etc. All of that, beyond the cost, slows you down because of the complexity. Regular gardening of your tech stack matters and has a lot of benefits.
  • 44za12 3 hours ago
    I read it as an article in defence of boring tech with a fancier/clickbaity title.

Here’s the more honest one I wrote a while back:

    https://aazar.me/posts/in-defense-of-boring-technology

    • dvfjsdhgfv 3 hours ago
      While I agree with your points, this one could be more nuanced:

      > Infrastructure: Bare Server > Containers > Kubernetes

The problem with recommending a bare server first is that bare metal fails. Usually every couple of years a component fails - a PSU, a controller, a drive. Also, a bare-metal server is more expensive than a VPS.

      Paradoxically, a k3s distro with 3 small nodes and a load balancer at Hetzner may cost you less than a bare metal server and will definitely give you much better availability in the long run, albeit with less performance for the same money.

  • sourcecodeplz 47 minutes ago
I do it even simpler: build in PHP with web hosting from Hetzner. All managed: email, sub-domains, name servers, OS updates/patches, etc.

    I really started to enjoy managed servers/instances.

  • jstanley 4 hours ago
    The most interesting thing in here is https://github.com/smhanov/laconic which is the author's "agentic research orchestrator for Go that is optimized to use free search & low-cost limited context window llms".

    I have been doing this kind of thing with Cursor and Codex subscriptions, but they do have annoying rate limits, and Cursor on the Auto model seems to perform poorly if you ask it to do too much work, so I am keen to try out laconic on my local GPU.

    EDIT:

    Having tried it out, this may be a false economy.

    The way it works is it has a bunch of different prompts for the LLMs (Planner, Synthesizer, Finalizer).

    The "Planner" is given your input question and the "scratchpad" and has to come up with DuckDuckGo search terms.

    Then the harness runs the DuckDuckGo search and gives the question, results, and scratchpad to the Synthesizer. The Synthesizer updates the scratchpad with new information that is learnt.

    This continues in a loop, with the Planner coming up with new search queries and the Synthesizer updating the scratchpad, until eventually the Planner decides to give a final answer, at which point the Finalizer summarises the information in a user-friendly final answer.

    That is a pretty clever design! It allows you to do relatively complex research with only a very small amount of context window. So I love that.

    However I have found that the Synthesizer step is extremely slow on my RTX3060, and also I think it would cost me about £1/day extra to run the RTX3060 flat out vs idle. For the amount of work laconic can do in a day (not a lot!), I think I am better off just sending the money to OpenAI and getting the results more quickly.

    But I still love the design, this is a very creative way to use a very small context window. And has the obvious privacy and freedom advantages over depending on OpenAI.
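The loop described above boils down to a small harness. Here is a sketch in Go of the control flow exactly as the comment describes it, with the three LLM roles and the search call injected as functions - this is not laconic's actual API, just the shape of the cycle:

```go
package main

// researchLoop sketches the Planner/Synthesizer/Finalizer cycle. The
// function parameters stand in for the LLM calls and the DuckDuckGo
// search; the scratchpad is the only state carried between iterations.
func researchLoop(
	question string,
	plan func(question, scratchpad string) (query string, done bool),
	search func(query string) string,
	synthesize func(question, results, scratchpad string) string,
	finalize func(question, scratchpad string) string,
	maxSteps int,
) string {
	scratchpad := ""
	for i := 0; i < maxSteps; i++ {
		// Planner: propose the next search, or decide we're done.
		query, done := plan(question, scratchpad)
		if done {
			break
		}
		// Run the cheap web search outside the LLM.
		results := search(query)
		// Synthesizer: fold new facts into the scratchpad, discard the rest.
		scratchpad = synthesize(question, results, scratchpad)
	}
	// Finalizer: turn the scratchpad into a user-facing answer.
	return finalize(question, scratchpad)
}
```

Because only the scratchpad survives between iterations, each individual LLM call stays within a few thousand tokens of context, which is what makes the small-context-window models viable.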

    • andai 2 hours ago
      Yeah, came here to mention that too!

      From the article:

      >To manage all this, I built laconic, an agentic researcher specifically optimized for running in a constrained 8K context window. It manages the LLM context like an operating system's virtual memory manager—it "pages out" the irrelevant baggage of a conversation, keeping only the absolute most critical facts in the active LLM context window.

      The 8K part is the most startling to me. Is that still a thing? I worked under that constraint in 2023 in the early GPT-4 days. I believe Ollama still has the default context window set to 8K for some reason. But the model mentioned on laconic GitHub (Qwen3:4B) should support 32K. (Still pretty small, but.. ;)

      I'll have to take a proper look at the architecture, extreme context engineering is a special interest of mine :) Back when Auto-GPT was a thing (think OpenClaw but in 2023), I realized that what most people were using it for was just internet research, and that you could get better results, cheaper, faster, and deterministically, by just writing a 30 line Python script.

      Google search (or DDG) -> Scrape top N results -> Shove into LLM for summarization (with optional user query) -> Meta-summary.

      In such straightforward, specialized scenarios, letting the LLM drive was, and still is, "swatting a fly with a plasma cannon."
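That fixed pipeline really is only a few lines once the external calls are stubbed out. A sketch in Go rather than the Python the comment mentions, with search, scrape, and summarize all injected (the function signatures are this sketch's assumptions, not from the comment):

```go
package main

import "strings"

// researchPipeline is the deterministic search -> scrape -> summarize ->
// meta-summary flow described above. No agent decides the control flow;
// the LLM is only ever asked to summarize text it is handed.
func researchPipeline(
	query string, n int,
	search func(q string, n int) []string, // top-n result URLs
	scrape func(url string) string,        // page text
	summarize func(q, text string) string, // LLM summary, query-aware
) string {
	var summaries []string
	for _, url := range search(query, n) {
		summaries = append(summaries, summarize(query, scrape(url)))
	}
	// Meta-summary over the per-page summaries.
	return summarize(query, strings.Join(summaries, "\n"))
}
```

Each stage is independently swappable, which is the whole appeal over letting an agent drive: the same script runs the same way every time.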

      (The analog these days would be that many people would be better off asking Claw to write a scraper for them, than having it drive Chromium 24/7...)

      • jstanley 53 minutes ago
        > (The analog these days would be that many people would be better off asking Claw to write a scraper for them, than having it drive Chromium 24/7...)

        Possibly. But possibly you have a very long tail of sites that you hardly ever look at, and that change more frequently than you use them, and maintaining the scraper is harder work than just using Chromium.

        The dream is that the Claw would judge for itself whether to write a scraper or hand-drive the browser.

        That might happen more easily if LLMs were a bit lazier. If they didn't like doing drudgery they would be motivated to automate it away. Unfortunately they are much too willing to do long, boring, repetitive tasks.

        • andai 46 minutes ago
Yeah, I think the ideal setup is two-tier: an extremely lazy, large model + an extremely diligent Ralph.

          Not sure if top model should be the biggest one though. I hear opposite opinions there. Small model which delegates coding to bigger models, vs big model which delegates coding to small models.

          The issue is you don't want the main driver to be big, but it needs to be big enough to have common sense w.r.t. delegating both up[0] and down...

          [0] i.e. "too hard for me, I will ping Opus ..." :) do models have that level of self awareness? I wanna say it can be after a failed attempt, but my failure mode is that the model "succeeds" but the solution is total ass.

  • hackingonempty 4 hours ago
    > If you need a little breathing room, just use a swapfile.

You should always use a swap file/partition, even if you don't want any swapping. That's because there are always cold pages, and if you have no swap space that memory cannot be used for apps or buffers; it's just wasted.

    • berkes 4 hours ago
      I always thought I had to add a swap file to avoid crashing with OOM. I wasn't aware of the cold pages overhead.

      Sometimes that crashing is what I want: a dedicated server running one (micro)service in a system that'll restart new servers on such crashes (e.g. Kubernetes-alike). I'd rather have it crash immediately rather than chugging along in degraded state.

But on a shared setup like OP shows, or the old LAMP-on-a-VPS, I'd prefer the system to start swapping and have a chance to recover. IME it quite often does. It will take a few minutes (of near downtime) but will avoid data corruption or crash loops much more easily.

      Basically, letting Linux handle recovery vs letting a monitoring system handle recovery

  • xxxxxxxx 54 minutes ago
This is similar to what I do. Linode, Debian, Go, HTMX, SQLite (with modernc.org/sqlite so I have no cgo dependency) and Caddy. If I have apps that need a lot of storage, I just add an S3 bucket.
  • podlp 1 hour ago
I love SQLite and have run it even on networked drives with queued writes for read-heavy applications. It’s an incredibly robust piece of software that’s often cost me pennies per month to serve 100k+ monthly users. But there’s definitely a time and place for solid, dedicated database servers like Postgres.
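The queued-writes idea can be sketched as a single writer goroutine fed by a channel - my reading of what the comment means, since SQLite allows only one writer at a time and funneling writes through one place avoids lock contention. The actual statement execution is injected, so no SQLite driver appears in the sketch:

```go
package main

// writeQueue serializes all writes through one goroutine. exec stands in
// for the real statement execution (e.g. a db.Exec call) - an assumption
// of this sketch, not a specific library's API.
type writeQueue struct {
	jobs chan func()
	done chan struct{}
}

func newWriteQueue(buffer int) *writeQueue {
	q := &writeQueue{jobs: make(chan func(), buffer), done: make(chan struct{})}
	go func() {
		defer close(q.done)
		for job := range q.jobs { // single writer: jobs run strictly in order
			job()
		}
	}()
	return q
}

// Submit enqueues a write and blocks until it has run, returning its error.
func (q *writeQueue) Submit(exec func() error) error {
	errc := make(chan error, 1)
	q.jobs <- func() { errc <- exec() }
	return <-errc
}

// Close drains pending writes and stops the writer goroutine.
func (q *writeQueue) Close() {
	close(q.jobs)
	<-q.done
}
```

Readers keep their own connections and never touch the queue; only mutations go through it, which is what keeps a read-heavy app fast.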
  • prakhar897 3 hours ago
Do these things actually work? I've seen way too many gurus on Twitter claiming to make 10K+ MRR. And then they quietly start applying for jobs, or selling courses instead of cashing in.
    • wasmainiac 1 hour ago
Right? I’m not buying it. Seems like a personal PR post.

Why care so much about such small operating costs when you’re earning so much?

  • ponco 3 hours ago
Always good to challenge the narrative - but I don't pay for RDS Postgres because of the WAL, replication, all the beauty of pg etc. I pay for RDS because it's largely set and forget. I am gladly paying AWS to think about it for me. I think at a certain scale, this is a really good tradeoff. At the very beginning it could be overkill, and at the top end obviously it's unsuitable - but for most of us those tradeoffs are why it's successful.
  • iamflimflam1 1 hour ago
    Would be handy to actually see what these companies do…
  • firefoxd 4 hours ago
I was writing about this recently [0]. In the 2000s, we were bragging about how cheap our services were and were getting. Today, a graduate with an idea is paying $200 amounts to AWS after the student discounts. They break the bank and go broke before they have tested the idea. Programming is literally free today.

    [0]: https://idiallo.com/blog/programming-tools-are-free

  • ronbenton 2 hours ago
    I want to know how he’s identifying and monetizing businesses
  • pelorat 2 hours ago
This is how every website used to be run before everyone fell for the cloud trap.
    • pdimitar 51 minutes ago
      Love your username and how it relates to your comment -- and the topic at hand.
  • krypttt 1 hour ago
    We have gone full circle haven't we?
  • WolfOliver 2 hours ago
$20 vs $300 does not really matter if you have multiple $10K-MRR businesses.
    • signatoremo 1 hour ago
It isn’t 10k MRR from day one. It also doesn’t make sense to think “well, now that I’m a big boy, let’s move to a fancy stack”, even if there is no need for it.
      • WolfOliver 36 minutes ago
        for me, using go is the fancy stack
    • elAhmo 1 hour ago
      Exactly. Deciding on some very expensive subscriptions that can cost 1k per month or so might be worth thinking about, but this is just meaningless optimisation.
  • cagz 4 hours ago
Nice tech read, but without information about which companies, doing what, it just feels way too click-baity.
  • gloomyday 3 hours ago
    I think newer developers really need to learn that you can actually do production stuff using bare tools. It is not crazy, especially in the beginning, and it will save you a ton of money and time.
  • Myzel394 2 hours ago
Does anybody know a good service to self-host AI? My graphics card is shit; I want to rent hardware to run my own models.
  • esskay 2 hours ago
It always makes me both roll my eyes and smile a little when I see someone daft enough to think they need some obscene setup - you don't. You never have. You are not Amazon, Microsoft, Google, etc. If you get to the point where you need that kind of setup, you're already employing a DevOps team that's telling you that.

    Stick whatever you're working on onto a ~$5/mo cheapo vps from someone like Hetzner, Digitalocean, etc and just get on with building your thing.

  • hirako2000 1 hour ago
Very interesting insights on running LLMs locally.

    Edited.

  • stavros 1 hour ago
    Forget about the tech stack, how do I get multiple $10k MRR companies?
  • raincole 4 hours ago
So what's the $10K MRR product, exactly? The lede is buried into nonexistence. Is it this one: https://www.websequencediagrams.com/ ...?

    > Here is the trick that you might have missed: somehow, Microsoft is able to charge per request, not per token. And a "request" is simply what I type into the chat box. Even if the agent spends the next 30 minutes chewing through my entire codebase, mapping dependencies, and changing hundreds of files, I still pay roughly $0.04.

    Really? Lol. If it's true why would you publish it? To ensure Microsoft will patch it up and fuck up your workflow?

    • faangguyindia 1 hour ago
It's already known. The trick is MS has a very small context size, so it won't be much use.
    • nesk_ 3 hours ago
      >Really? Lol. If it's true why would you publish it? To ensure Microsoft will patch it up and fuck up your workflow?

      It's true and it's their official pricing, so talking about it won't change anything.

      People are spending way too much money with Claude Code while they could simply pay for GitHub Copilot and fire up OpenCode to get the same results but way cheaper.

  • zmmmmm 3 hours ago
Can anybody validate this GitHub Copilot trick for accessing Opus 4.6? Sounds too good to be true.
    • brushfoot 2 hours ago
      Longtime happy Copilot user here. It's true.

      The pricing is so good that it's the only way I do agentic coding now. I've never spent more than $40 in a month on Opus, and I give it large specs to work on. I usually spend $20 or so.

    • specproc 3 hours ago
      I'm not what I'd call a heavy user, but I've also mainly been using Copilot in VS Code on the basic sub.

      You do get Opus 4.6, and it's really affordable. I usually go over my limits, but I'm yet to spend more than 5 USD on the surcharges.

      Not seen a reason to switch, but YMMV depending on what you're doing and how you work.

    • nesk_ 3 hours ago
      It is true, it's the official pricing of GitHub Copilot.
      • rzzzt 2 hours ago
        Why is GitHub sticking to per-request pricing when other providers switched to per-token for the high performing models?
  • jofzar 3 hours ago
I decided to look at their website halfway through the post:

    https://imgur.com/a/7M4PdO6

This is really what 10k MRR can get you? A badly designed AI-slop website that isn't even correctly mobile compatible. The logo is a white background on a black website, like a university project.

    I can't believe that people are willingly spending money on this.

    • yakshaving_jgt 1 hour ago
      You'd be surprised at the amounts household name companies spend on broken software. I've personally seen multiple companies spend tens of thousands paying just for the opportunity to evaluate the broken software. And I don't mean the time taken for their own employees to spend doing the evaluation. I mean that plus forking over large piles of cash.
  • dnnddidiej 4 hours ago
Is infra where investors' money is going? I imagined salaries would be it. Marketing costs, maybe.
    • swiftcoder 3 hours ago
      For single-person companies infra can be the single largest expense (especially if you aren't paying yourself yet!). The day you bring a full-time employee onboard, I have a hard time seeing infra costs ever exceeding salaries for most shops
  • ValtteriL 3 hours ago
    >The feedback was simply: "What do you even need funding for?"

    Not clear from the text, but what was your plan using the funding on? If you did not have a plan, what did you expect? VCs want to see how adding more money results in asymmetric returns.

  • m00dy 2 hours ago
I think making is the easiest part; it would be really cool if you also revealed how you distribute what you're making for $20/mo.
  • blurb2023 2 hours ago
    well, the guy runs what he runs and can't complain
  • petesergeant 3 hours ago
    If you can’t articulate what you need funding for, don’t be surprised if nobody will give it to you?
  • sailingcode 4 hours ago
    AI has solved the "code problem", but it hasn't solved the "marketing problem"…
  • brador 4 hours ago
You already have and had everything you need to scale the business to the max, and it hasn’t happened, so more money won’t help.

    What do you want VC to do?

    You didn’t bring a plan.

    • berkes 4 hours ago
      I was wondering this as well: Why did OP look for VC?

      In my case, I've used a similar strategy of keeping costs under €100/month. (But have sold, or stopped my ventures before hitting such MRRs as OP reports).

      I raised some capital to pay my own bills during development. But mostly to hire freelancers to work on parts that I'm bad at, or didn't have time for: advertising, a specific feature, a library, rewrite-in-rust (wink) or deep research into functional improvements.

  • petesergeant 3 hours ago
    You can get all the advantages and almost none of the constraints by buying a bigger base server for $50/m
  • turdfergusonsr 1 hour ago
Eh, the super-low cost only comes from low complexity. If it's complex, people pay more, features cost more, and infra costs aren't that big compared to dev time.
  • tradertef 6 hours ago
    Not my website. I found this interesting.
  • globalnode 4 hours ago
nice article, validates some of the things i already thought. although i'm sure things like AWS and database servers etc are still useful for big companies
  • Madmallard 3 hours ago
So is the slopocalypse gonna destroy HN too? The 2nd-from-the-top article is AI-written and not proofread.
  • joongix 59 minutes ago
    [dead]
  • sanghyunp 2 hours ago
    [dead]
  • 6stringmerc 3 hours ago
    What a fascinating article. I especially love the part about writing extremely detailed requests which only cost $0.04 versus the token approach most “vibe code” devs use. Fortunately his tactic is almost impossible to emulate for 90% of the YCombinator audience / HN commentators.

    Why do I know this? Because there had to be a declaration here to stop using ChatGPT and other Agents to write YOUR OWN GODDAMN POSTS. Thinking isn’t your strong suit, Greed is, and taking the time to learn the power of English doesn’t satisfy the latter, so you minimize it to your own detriment.

    Don’t get mad at me. Go punch a mirror.

  • trick-or-treat 4 hours ago
    LMFAO at Linode / Digital Ocean as lean servers.

    Hetzner / Contabo maybe. Cloudflare workers definitely.

    This guy is not at my level and multiple $10k MRR is possible but unlikely.

  • codemog 4 hours ago
    A lot of this advice is good or at least interesting. A lot of it is questionable. Python is completely fine for the backend. And using SQLite for your prod database is a bad idea, just use Postgres or similar.
    • tkcranny 4 hours ago
There’s a lot to be said for his approach with Go for simplicity. Python needs virtual environments, package managers, dependencies on disk, a WSGI/ASGI server to run forked copies of the server, and all of that uses 4x-20x the RAM of Go. Docker usually gets involved around here, and before you know it you’re neck deep in Helm charts and cursing CNI configs in an EKS cluster.

The Go equivalent of just copying one file across to a server and restarting its process has a lot of appeal, and clearly works well for him.

      • berkes 3 hours ago
        Yes. It strikes me as odd how many people will put forward Python with the argument of "simplicity".

        It is not. Simple. It may be "easy" but easy != simple (simple is hard, I tend to say).

I'm currently involved in a project that was initially laid out as microservices in Rust and some Go, to slowly replace a monolithic Django monstrosity with 12+ years of tech debt.

But the new hires are pushing back and re-introducing Python, with that argument of simplicity. Sure, Python is much easier than a Rust equivalent, esp. in early phases. But to me, a 25+ year developer/engineer, yet new to Python, it's unbelievably complex. Yes, uv solves some of it. As do ty and ruff. But, my goodness, what a mess to set up simple CI pipelines, or a local development machine (that doesn't break my OS or other software on that machine). Hell, even the Dockerfiles are magnitudes more complex than most others I've encountered.

        • wanderlust123 2 hours ago
          I am not following the difficulties you have mentioned. Setting up a local dev environment in Python is trivial with UV.

The only major downside of Python is that it's got a somewhat poor module system and nothing as seamless as Cargo.

          Beyond that the code is a million times easier to understand for a web app.

    • chrismorgan 4 hours ago
      Python will take you a long way, but its ceiling (both typical and absolute) is far lower than the likes of Go and Rust. For typical implementations, the difference may be a factor of ten. For careful implementations (of both), it can be a lot more than that.

      Does the difference matter? You must decide that.

      As for your dismissing SQLite: please justify why it’s a bad idea. Because I strongly disagree.

      • mattmanser 4 hours ago
        What a load of nonsense.
        • danhau 4 hours ago
          Why is it nonsense? Sounds reasonable to me.
          • blitzar 3 hours ago
            > its ceiling (both typical and absolute) is far lower

If you plan on remaining smaller than Instagram, the ceiling is comfortably above you.

            • pdimitar 38 minutes ago
              There are a myriad middle states in-between "frupid" (so frugal that it's stupid) and "Instagram scale".

Python requires much more hand-holding, which many don't want to do, for good reasons (I prefer to work on the product unimpeded, and I take no pride in having the knowledge to babysit obsolete stacks carried along by university nostalgia).

              With Go, Rust, Zig, and a few others -- it's a single binary.

              In this same HN thread another person said it better than me: https://news.ycombinator.com/item?id=47737151

            • jasdfwasd 3 hours ago
              I plan to remain smaller than two VMs
    • gls2ro 4 hours ago
      Why is SQLite bad for production database?

      Yes, it has some things that behave differently than PostgreSQL but I am curious about why you think that.

      • trick-or-treat 4 hours ago
For read-only it can be a great option. But even then I would choose D1, which has an amazing free tier and is SQLite under the hood.
        • saltmate 3 hours ago
          But then you don't get the benefits of having the DB locally, with in-process access.
          • trick-or-treat 2 hours ago
            It's local to the worker? I don't understand what you mean.
    • cenamus 4 hours ago
I think the point is that your Python webapp will have more problems scaling to, let's say, 10,000 customers on a $5 VPS than Go would. Of course you can always get beefier servers, but then that adds up for every project.
      • harvey9 3 hours ago
At 10,000 paying customers I don't think it is frivolous to move to a $10/month VPS, or maybe a second $5/month one for failover.
  • komat 4 hours ago
    Cool but missing the Claude Code or Coding Agent part imo
    • PhilippGille 4 hours ago
He specifically mentions that he is using GitHub Copilot because of how Microsoft bills per request instead of per token.
  • mstaoru 3 hours ago
    While I applaud the acumen, this reads like watching a kid standing on the 3rd floor balcony shouting "look what I can do!"

    $20/month. Yeah. Great, but why? You get a lot of peace of mind with "real" HA setup with real backups and real recovery, for not much more than $20, if you are careful.

The other half of the article is about running "free, unlimited" local AI on a GPU (Santa brought it) with, apparently, free electricity (Santa pays for it).