Questions for folks running this in production:
What do you use today? (MQTT broker + ??, Kafka/Redpanda/NATS, Redis Streams, custom log files, embedded DB, etc.)
Where do you buffer during outages: append-only log, SQLite/RocksDB, queue-on-disk, something else?
How do you handle backpressure when disk is near full? (drop policy, compression, sampling, prioritization)
What’s your failure nightmare: corruption, replay storms, duplicates, “stuck” consumer offsets, disk-full, clock skew?
What guarantees do you actually need: zero-loss vs “best effort” (and where do you draw that line)?
What metrics/alerts matter most on gateways? (queue depth, replay rate, oldest event age, fsync latency, disk usage, etc.)
I’d love to learn what works, what breaks, and what you wish existing tools did better.
I have a system that runs on edge services and captures everything to logs through FluentBit. A cron job then compresses, encrypts, and tries to send the logs to device-specific S3 buckets. If the on-device logs grow too large, it starts dropping the oldest logs first, with a heuristic that treats certain logs as more or less important. When devices reconnect to the cloud they push logs as quickly as they can, and the cloud infra backfills metrics as they arrive.
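The drop heuristic is roughly this shape (a stripped-down sketch, not our real config; the paths, priority tiers, and size budget are made up):

```python
# Sketch: drop old, low-priority logs first once the log partition
# exceeds a budget. Priority tiers and paths are hypothetical.
from pathlib import Path

LOG_DIR = Path("/var/log/edge")            # hypothetical on-device log dir
DISK_BUDGET_BYTES = 512 * 1024 * 1024      # hypothetical cap

# Lower number = more important = dropped last (hypothetical mapping).
PRIORITY = {"audit": 0, "metrics": 1, "debug": 2}

def stream_priority(path: Path) -> int:
    # Infer priority from a filename prefix like "debug-2024-06-01.log.gz".
    prefix = path.name.split("-", 1)[0]
    return PRIORITY.get(prefix, 1)

def enforce_budget() -> None:
    files = [p for p in LOG_DIR.glob("*.log.gz") if p.is_file()]
    total = sum(p.stat().st_size for p in files)
    # Delete lowest-priority files first, oldest within each tier.
    victims = sorted(files, key=lambda p: (-stream_priority(p), p.stat().st_mtime))
    for victim in victims:
        if total <= DISK_BUDGET_BYTES:
            break
        total -= victim.stat().st_size
        victim.unlink()

if __name__ == "__main__":
    enforce_budget()
```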
Once the logs land in S3, triggers kick off a series of Lambdas to decrypt, decompress, and analyze them. It works well and is easy to reason about.
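The first Lambda in that chain looks roughly like this (simplified sketch; the Fernet key handling and the analyze() hook are placeholders, not what we actually run):

```python
# Sketch of the S3-triggered Lambda: fetch the object, decrypt, decompress,
# hand off for analysis. Cipher choice and env var name are hypothetical.
import gzip
import os

import boto3
from cryptography.fernet import Fernet  # hypothetical choice of cipher

s3 = boto3.client("s3")
fernet = Fernet(os.environ["LOG_DECRYPTION_KEY"])  # hypothetical env var

def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        ciphertext = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        # Reverse of the on-device pipeline: decrypt first, then decompress.
        plaintext = gzip.decompress(fernet.decrypt(ciphertext))
        analyze(plaintext, source_key=key)  # hypothetical downstream step

def analyze(payload: bytes, source_key: str) -> None:
    # Placeholder: the real pipeline parses and backfills metrics here.
    print(f"{source_key}: {len(payload)} bytes of logs")
```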
The backend can easily be swapped out for something else. The harder part is the compress/encrypt/rotate logic on the device: it's important not to treat all logs exactly the same, since some are much more important and should be preserved over others.
A couple quick questions if you don’t mind:
Roughly what volume are you pushing per device (MB/day or events/sec), and what’s your typical offline window?
What’s your biggest failure mode today: disk-full/rotate policy, encryption key handling, replay storms on reconnect, or Lambda fanout/cost?
I’m thinking Ayder could replace the “rotate → ship” backend with a durable local log + priority queues + replay, but you’re right that the hardest part is the policy (what to drop first, how to bound disk, and how to preserve critical streams). If you’re open, I’d love to learn what heuristics you ended up with.
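Roughly the shape I have in mind (this is not Ayder's actual API, just a sketch of the durable-log-plus-replay idea, with made-up paths and a JSON-lines record format):

```python
# Sketch: fsync'd appends, a persisted consumer offset, and replay of
# anything past the last acked offset after a crash or reconnect.
import json
import os

LOG_PATH = "/var/lib/gateway/events.log"       # hypothetical
OFFSET_PATH = "/var/lib/gateway/acked.offset"  # hypothetical

def append(event: dict) -> None:
    line = json.dumps(event, separators=(",", ":")) + "\n"
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(line)
        f.flush()
        os.fsync(f.fileno())  # durable before we ack the producer

def ack(offset: int) -> None:
    # Write-then-rename so a crash never leaves a corrupt offset file.
    tmp = OFFSET_PATH + ".tmp"
    with open(tmp, "w", encoding="utf-8") as f:
        f.write(str(offset))
        f.flush()
        os.fsync(f.fileno())
    os.replace(tmp, OFFSET_PATH)

def replay(send) -> None:
    # Resume from the last acked byte offset; at-least-once, so the
    # receiver must tolerate duplicates.
    start = 0
    if os.path.exists(OFFSET_PATH):
        with open(OFFSET_PATH, encoding="utf-8") as f:
            start = int(f.read().strip() or 0)
    with open(LOG_PATH, "rb") as f:
        f.seek(start)
        for line in f:
            send(json.loads(line))
            start += len(line)
            ack(start)
```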
If anyone is open to a tiny design-partner pilot (30–60 min): run docker compose → ingest some telemetry → simulate outage (kill -9 / disconnect) → restart → verify replay + zero loss. I’ll do white-glove onboarding and turn the learnings into a short case study (can be anonymous).
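For that last step, the zero-loss check is basically this (the seq field is just for illustration; any monotonic ID the producer stamps on events works):

```python
# Sketch of the "verify replay + zero loss" check: gaps mean loss,
# duplicates are tolerated. Field name "seq" is an assumption.
def verify_zero_loss(sent_count: int, received_events: list[dict]) -> None:
    seen = {e["seq"] for e in received_events}
    missing = set(range(sent_count)) - seen
    assert not missing, f"lost {len(missing)} events, e.g. {sorted(missing)[:5]}"
    print(f"zero loss: {sent_count} sent, {len(received_events)} received "
          f"({len(received_events) - len(seen)} duplicates)")
```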