"For security reasons (and to protect the PII of all our users and customers), everything was being shredded and/or destroyed" - what!? That is ridiculous. The only possible thing you need to destroy would be the hard drives. Why on earth would you shred everything?
This is common compliance nomenclature. The only people paying the high cost to have a full sized piece of equipment destroyed are governments or R&D companies with unique prototypes.
The hard drives are most likely being shredded since that is a common practice and certification feature offered by most disposal companies.
The servers are being "destroyed" because that's how they will be recorded in inventory and for tax purposes, to claim full depreciation. The company isn't "selling" the servers to the disposal company, so they are marked as "destroyed."
Unless the contract specifies otherwise, the disposal company will sell the chassis (minus the drives) to a reseller; or, if they are being paid to dispose of the systems, they will separate the components and recycle the metal.
The same goes for the power and network cables: they will go off to a recycler. It's how disposal companies offset their pricing.
This is correct. Once upon a time, I worked for a company doing this type of disposal. They had 18-wheelers full of equipment show up a couple of times per week. Drives were pulled and put in a pile to be shredded. Everything else was tested and either restored to working order and sold in bulk to organizations in need of cheaper computers, parted out on eBay, or scrapped.
Happy to say that my company shreds its old hard drives, then puts new ones in the old laptops and desktops and spruces them up for reuse.
We team up with a local organization to give them to poor children and families each Christmas. IT always sends around a bunch of photos afterward of kids who don't always know where they'll sleep from night to night clinging to a used computer like it's a life ring and they're in the middle of the ocean. I've been told that for some of them, it's the only present they'll receive the entire year.
We don't qualify for a tax break or any other remuneration for this. We do it because it's a nice thing to do.
(I have no idea what happens to servers. Not my department.)
I also thought that was quite wasteful. Even Ethernet cables are cut. Why not just put them online for free pickup?
Also, I can't help wondering if the switch to cloud makes sense for Stack Overflow now, given that their traffic collapsed. I took the whole post as something that should be mourned a bit, not gleefully destroyed.
I wondered that, too. The scalability of cloud is ok when you are growing but truly wonderful when you are shrinking.
Stack Overflow used to serve over half a billion page views per month on only 25 servers[0]. I wonder how much traffic they get now, excluding crawlers?

[0] https://highscalability.com/stackoverflow-update-560m-pagevi...
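As a rough sanity check on that number (treating the ~560M/month figure from [0] as a plain average, not peak load), the per-server rate comes out surprisingly low:

    # Back-of-envelope: average throughput implied by ~560M page views/month on 25 servers.
    page_views_per_month = 560_000_000
    servers = 25
    seconds_per_month = 30 * 24 * 3600

    views_per_sec = page_views_per_month / seconds_per_month
    print(f"~{views_per_sec:.0f} page views/sec site-wide")            # ~216
    print(f"~{views_per_sec / servers:.1f} page views/sec per server")  # ~8.6

Peaks, and the fact that presumably not all 25 boxes were front-end web servers, would push the real per-machine numbers higher.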
Anecdotally, one of the reasons for SO's decline is that an LLM gives you the SO answer without the condescending snark. SO could have fixed that years ago, but didn't. So perhaps it might turn out to be yet another example of the dictum that culture eats strategy for breakfast.
Where? Should they drop them at a Starbucks out front? Have an employee volunteer their home address? Put them out front at the office? How long should they hold onto the "free" cables while people ask ridiculous questions such as "Do they work? When were they tested?" Are you going to force people to take all the cables or allow random selections? Are you going to waste the cost savings of moving to the cloud by having one of your tech people monitor requests for pickup? And if no one picks them up, are you calling the recycler out again to collect the cables you could have just given them in the first place?
Anyone who has posted "free" things online knows it comes at a cost; that's the logic part I was referring to. When you work through the scenarios, the "logical" conclusion is to give them to the recycler you already have out at the datacenter for the systems you are decommissioning.
This is such an absurd argument. It's wild that you took the time to write this. But the next time you run into a problem like this, just ask a co-worker. I'm sure they can handle the logistics of giving something away for free.
I would say they concluded it would cost more to remove the HDDs and sell the machines than to just shred them. And they wouldn't risk forgetting an HDD inside.
The reasoning could be that this makes reliably scaling down (and thus continuing to make a profit) easier, starting with getting rid of SREs.
We have a similar movement going on with Xing here in Hamburg, Germany (once conceived as a LinkedIn competitor).
Great names that still have a lot of momentum, but are expected by ownership to slow down.
Reminds me of Scott Galloway’s most profitable investment having been a yellow pages company. Yes, the market shrunk, but they could shrink running costs as fast or even faster.
"Stack Overflow no longer has any physical datacenters or offices; we are fully in the cloud and remote!"
Am I misunderstanding something here? They're just transferring from a physical datacenter owned and managed directly by them, to a small rented/leased part of those owned and managed by someone bigger. Since digital data can't just exist in a magical, airy-fairy realm, it has to physically be somewhere either way.
Wouldn't it have been safer to control their own physical servers, considering how they mention protecting user information?
It means that Stack Overflow as a company doesn't need to think about the exact location, except in so far as they need the actual operator to satisfactorily assert that it's secure and operating in the right jurisdiction with enough backup redundancies.
As a Private Equity holding, it's safer to outsource regulatory/compliance/security risk and liability to a reputable third party, where reputable is the proverbial "nobody gets fired for buying IBM" of the cloud era.
This is a terrible waste. Wiping all storage would take a day at best (rough numbers sketched below); this hardware is still worth $10-50k. They could donate it.
Then again, they’re migrating to Azure and the whole thing ran for years on SQL Server; being good at tech was never these ex-MS guys’ strong suit. This kind of forklifting is expected from this specific type of corporate droid; it’s how they’ve always done it. Entire industries run just like this, and it’s terrible and stupid.
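On the wipe-time point: a single-pass overwrite is bounded mostly by sequential write speed, and since drives across machines can be wiped in parallel, "about a day" is plausible. A minimal sketch, with the drive size and write speed as pure assumptions rather than SO's actual hardware:

    # Time for one full overwrite pass of a single drive (assumed numbers).
    drive_tb = 8              # hypothetical capacity per drive
    write_mb_per_s = 200      # assumed sustained sequential write speed

    seconds = (drive_tb * 1e12) / (write_mb_per_s * 1e6)
    print(f"~{seconds / 3600:.1f} hours per drive, per pass")   # ~11 hours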
Nobody likes to hear it but hardware just doesn’t last that long.
Even if you reuse the hardware by selling it off to homelab types or donating it, it’ll get faulty in a few very short years. It’s already been running 24/7 for a long time.
It’ll also be far less energy efficient in operation compared to newer hardware generations (rough numbers below).
I also don’t really see what your beef with SO is for using SQL Server. Are you suggesting that companies do a major refactor every time their software stack goes out of style? There really isn’t anything wrong with SQL Server nor is there anything wrong with SO at a technical level.
You might be upset that entire industries run on imperfect tech stacks but value isn’t driven by tech stack choice. I’m not going to buy a sandwich from a different restaurant because they are using better tech than the restaurant with bad food.
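On the energy-efficiency point above, here is a hypothetical comparison of the electricity cost of an older box versus a newer one doing the same work; the wattages and price per kWh are assumptions, not measurements:

    # Yearly electricity cost at an assumed wall-power draw and price per kWh.
    old_watts, new_watts = 450, 250   # hypothetical average draw per server
    price_per_kwh = 0.15
    hours_per_year = 24 * 365

    def yearly_cost(watts):
        return watts / 1000 * hours_per_year * price_per_kwh

    print(f"old: ~${yearly_cost(old_watts):.0f}/yr, new: ~${yearly_cost(new_watts):.0f}/yr")
    # roughly $591 vs $329 per box per year, before cooling overhead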
> Nobody likes to hear it but hardware just doesn’t last that long.
My experience completely contradicts this, both at work and otherwise. The typical longevity of servers has always astounded me. Though my current biggish server (not including routers, etc) is a Zen 1 on 14nm built pre-pandemic; I've read 12nm and smaller might be more susceptible to degradation. RAM feature size has remained fairly constant, but I wouldn't be surprised if SSD MLC longevity has degraded.
The real issue, IMO, is that when you have a small set of servers any hardware issues become a bigger hassle. It's precisely because you can go years with a rack of servers humming along that when there is an issue, it feels quite intrusive and annoying.
Using a 6-7 year old system as an example of impressive longevity isn’t particularly convincing. You’re at the back end of its useful lifespan, just like I mentioned.
Any company that switches from real hardware to "the cloud" is going to triple their compute costs. They're obviously making too much money. By not even selling their old hardware, they are doing the equivalent of setting large piles of money on fire.
It could just be that they think the increase in their cloud bill will be low enough to justify switching and stopping employing people who currently run things in their datacentre.
If you have the knowledge and time, it doesn't make sense. That said, it's definitely not a zero-cost setup. You are playing sysadmin, which isn't a low-paid position in itself. You just amortize the actual cost.
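A toy version of that amortization, just to make the comparison concrete; every number below is an invented placeholder, not anyone's real costs:

    # Owned hardware vs. cloud, with capex spread over its useful life and a slice
    # of sysadmin time counted as an ongoing cost. All figures are made up.
    hardware_capex = 100_000        # servers, switches, racking
    useful_life_years = 5
    colo_per_year = 24_000          # space, power, bandwidth
    admin_time_per_year = 40_000    # the "playing sysadmin" cost

    owned_per_year = hardware_capex / useful_life_years + colo_per_year + admin_time_per_year
    cloud_per_year = 90_000         # hypothetical equivalent cloud bill

    print(f"owned: ~${owned_per_year:,.0f}/yr vs cloud: ~${cloud_per_year:,.0f}/yr")

Depending on how honestly you count the admin-time line, the comparison can swing either way, which is the whole argument.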
Hasn't Stack Overflow been in steady decline for years? How can they justify the huge increase in hardware cost that going to a cloud provider brings? I suppose it makes it easier to rapidly scale down your footprint to meet lower demand.
If traffic has been declining, their physical datacenter is likely to have been overprovisioned for current usage levels. Laying off most of their SREs is just another bonus for the bottom line.
"our datacenter vendor in NJ decided to shut down that location, and we needed to be out by July 2025."
Ah, so you're saying that going "on-prem" does not in fact give you total control over the situation? How peculiar! Has AWS ever shut down an EC2 region and forced everyone out?
I think this is more akin to shutting down an AZ, rather than a region, which certainly does happen. Except AZ lettering is randomised per AWS account, so your 'us-east-1a' isn't my 'us-east-1a'. Which means AWS can migrate people away over time. I believe older accounts which still use the old AZ are given notice that it is closing.
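If anyone wants to see their own account's mapping, the underlying zone IDs (which, unlike the letters, are consistent across accounts) are exposed by the EC2 API; a quick sketch assuming boto3 and configured credentials:

    # Print this account's AZ-letter -> zone-ID mapping; the letter suffixes are
    # shuffled per account, while the zone IDs identify the same physical AZs.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    for az in ec2.describe_availability_zones()["AvailabilityZones"]:
        print(az["ZoneName"], "->", az["ZoneId"])   # e.g. us-east-1a -> use1-az6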
Plus, there was the whole closure of AWS EC2 Classic, replaced with AWS VPC.
This is another reason why deploying over multiple AZs has its benefits. Not just for technical failure, but it means you can still move should one region close down.
I suppose an interesting question is: would I prefer to move a single-AZ deployment such as this in the cloud, or in the real world? Honestly, I can see the pros and cons of each.
In the cloud it involves a bunch of engineering time (possibly minimal, likely a lot more given reality). In the real world it involves a temporary fibre connection to the next DC over, and a gradual or rapid move of hardware with the help of some specialist contractors (for example). But at least the state and implementation quirks move with the compute. I can see it either way, but I can feel myself wanting to believe in the latter. There is something about trucking servers across town that appeals to me.
That's a red herring: unless you generate your own electricity and have at least a few uplinks, you're never going to have "total control over the situation".
On-prem still may be preferable to cloud for some use cases.
What happens with the shredded material? Is it recycled? Sent to heavy industries?
Let's use some logic here. The disposal company is taking the cables with them to recycle them for the copper wire. Same with power cables.
(Put them in a box that says "free" in the lobby of the NY office, then throw out whatever hasn't been taken a month later.)
https://news.ycombinator.com/item?id=3248911 - Why Stack Exchange Isn’t in the Cloud (2011)
The original blog post is down but available here: http://web.archive.org/web/20120120201529/http://blog.server...
What's the alternative? A datacenter that exists only in my imagination?
It's a bit of a shame, but I guess with declining traffic and revenue, they're also downsizing.
$50k is $50k.
I could go months without touching the thing running the application.
Or lightning could blow the UPS.
Or the mainboard could flake out.
Or the RAID could drop a disk.
Or you could get hit by a bus.
Again, nothing is ever free; at best the cost is deferred.