This post doesn't quite grasp why Meta made a 49% investment instead of an acquisition.
The path Meta chose avoided global regulatory review. The FTC, DOJ, etc., and their international counterparts could have chosen to review and block an outright acquisition; they have no authority to review a minority investment.
Scale shareholders received a comparable financial outcome to an acquisition, and also avoided the regulatory uncertainty that comes with govt review.
It was win/win, and there's a chance for the residual Scale company to continue to build a successful business, further rewarding shareholders (of which Meta is now the largest). That's pure wildcard upside, though, and was never the point of the original deal.
You are literally correct, but not engaging with the point. Meta's shares in Scale are explicitly non-voting shares. Influence/control after a minority investment matters as to whether the deal can be reviewed under the Clayton Act. Meta did everything it could to avoid regulatory review with this transaction, and it worked.
The point remains wrong even if you pivot to this argument. Every major competitor has cancelled their deals with Scale AI, and there are even rumours that the company will wind down operations while key workers are essentially absorbed into Meta. This is textbook anti-competitive behaviour and more than enough reason to warrant antitrust investigations by oversight agencies in the major markets.
But not for the majority/minority reasons the other comment implies. It would be utterly ridiculous if you could just buy 49% of each of your competitors without any possibility for the government to intervene.
I agree that this is a possible reason. Meta wants to move fast, and M&A is too slow for their tastes. I make the case in the article, though, that an actual acquisition doesn't really make sense for Meta's core business. But I agree it's possible.
I disagree that this is a win/win. Scale stock is still illiquid, and people who remain at Scale or hold Scale stock are now stuck with shares that are actually less valuable: even though the market price has ostensibly gone up, the people who made the stock valuable are gone.
Shareholders retain their stock, but also received a dividend equivalent to selling all their shares at a premium to the previous valuation. It is actually an incredible deal for them. (Source: I'm one of them.)
O shit, really? That's a massive update and possibly invalidates a lot of this article. It's also the first I've heard of this -- can you tell me more? Are you saying everyone essentially got bought out? (If you want to DM me in case it's still pseudo-private, feel free to: theahura at gmail.)
ETA: there's now an update header on the article based on this information
Reportedly, the only reason they did this at all is that Wang asked for a return for investors and employees. It was not Meta who wanted to own any part of Scale (which makes sense: they are already users of Scale data and don't really need anything they'd get from owning Scale as a company/business).
I guess OpenAI got a good deal at $6.5B for Jony Ive.
>Supposedly, Llama 4 did perform well on benchmarks, but the user experience was so bad that people have accused Meta of cooking the books.
This is one of those things I've noticed. I don't understand the benchmarks, and my usage certainly isn't as wide-ranging as the benchmarks, but I hear "OMG this AI and its benchmarks," and then I go use it and it's not any different for me ... or I get the usual wrong answers to things I've gotten wrong answers to before, and I shrug.
I think there's basically the benchmarks, and then "the benchmarks". The former is, like, actual stuff you can quantify and game. And the latter is the public response. The former is a leading indicator but it's not always accurate, and the latter is what actually matters. I knew Gemini 2.5 was better than Claude 3.7 based on the Reddit reaction lol
Any industry that cannot depend on artificial benchmarks, and instead has to depend on user benchmarks, is a labor-intensive industry. If it turns out that LLMs in their current form depend a lot on manual user benchmarks, then they will always require human workers. Kind of like how "AI" actually means "Actually Indians."
Just out of curiosity, what were the reactions like from what you saw? I had the opposite take from Reddit, which proved to be incorrect. So I'm just curious how you read the reactions (more correctly than me) vis-a-vis Reddit.
I keep meaning to try Gemini for coding but honestly they’ve all gotten reasonably good for my use cases that I don’t think I need to try a lot of new ones / new versions.
Not just hardware; he ruined UI design as well. Everything iOS 7+ (which has leaked into macOS) is basically his doing, or at least had to be approved by him. Trading Scott Forstall for Jony Ive was probably the biggest blunder Cook made, and it put Apple on its current trajectory of software & UI decline.
I learned to program C in high school on the original iMacs, and thanks to OS 9's lack of true multitasking and Ive's penchant for lack of buttons, anytime our code crashed or hit an infinite loop, we would have to reach around and unplug the computer from the wall. The single power button on the front was a soft button that required CPU cycles to operate.
There was also a tiny hole in the front that you could insert a paperclip to reset the machine, but unplugging from the wall was faster.
OpenAI didn’t get Jony Ive though. They bought an existing joint venture, i.e. he cashed out. They have a contract with his design firm for him to keep working on the project, but it’s basically great for him (massive payout, ongoing payments) and not particularly good for OpenAI.
In retrospect it wasn't really that the models were that bad compared to benchmarks, it was that Meta didn't work with inference engine providers ahead of time to integrate the new architecture like other teams usually do, so it was even more bugged than expected for the first few weeks.
The second problem, which compounded it, was that both Llama 4 models were too large for 99.9% of people to run, so the only opinion most people had of it came from what the few who could load it were saying after release. And they weren't saying good things.
So after inference was fixed the reputation stuck because hardly anyone can even run these behemoths to see otherwise. Meta really messed up in all ways they possibly could, short of releasing the wrong checkpoint or something.
The notion that Scale AI's data is of secondary value to Wang seems wrong. Data-labeling in the era of agentic RL is more sophisticated than the pejorative view of outsourcing mechanical-turk work at slave wages to third-world workers: it's about expert demonstrations and workflows, the shape of which is highly useful for deducing the sorts of RL environments frontier labs are using for post-training. This is likely the primary motivator.
> LLMs are pretty easy to make, lots of people know how to do it — you learn how in any CS program worth a damn.
This also doesn't cohere with my understanding. There are only a few hundred people in the world who can train competitive models at scale, and the process is laden with all sorts of technical tricks and trade secrets. It's what made the DeepSeek reports and results so surprising. I don't think the toy neural network one gets assigned to create in an undergrad course is a helpful comparison.
Relatedly, the idea that progress in ML is largely stochastic and so horizontal orgs are the only sensible structure seems like a weird conclusion to draw from the record. Calling Schmidhuber a one-hit wonder, or saying "The LLM paper was written basically entirely by folks for whom 'Attention is All You Need' is their singular claim to fame," neglects a long history of foundational contributions in the former case, and misses the prolific contributions of Shazeer in the latter. Alec Radford is another notable omission as a consistently superstar researcher. To the point about organizational structure, OpenAI famously made concentrated bets contra the decentralized experimentation of Google and kicked off this whole race. DeepMind is significantly more hierarchical than Brain was, and judging from comments by Pichai, that seemed like part of the motivation for the merger.
- Could be wrong about Scale. I'm going off folks I know at client companies and at Scale itself.
- idk, I've trained a lot of models in my time. It's true that there's an arcane art to training LLMs, but it's wrong that this is somehow unlearnable. If I could do it out of undergrad with no prior training and 3 months of slamming my head into a wall, so can others. (Large LLMs are imo not that much different from small ones in terms of training complexity. Tools like PyTorch and libraries like Megatron make these things much easier, ofc.)
- there are a lot of fantastic researchers and I don't mean to disparage anyone, including anyone I didn't mention. Still, I stand by my beliefs on ml. Big changes in architecture, new learning techniques, and training tips and tricks come from a lot of people, all of whom are talking to each other in a very decentralized way.
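For readers outside ML, the undergrad-scale "toy network" both sides mention is worth seeing concretely. Below is a minimal sketch (my own illustration, not anyone's production code): a character-bigram model trained with the same softmax cross-entropy objective LLMs optimize. The gap between this and a frontier run — scale, parallelism, data pipelines, and the arcane tricks discussed above — is exactly where the two comments disagree.

```python
import numpy as np

# Toy next-token model: a bigram logit table trained with softmax + SGD.
# Same loss LLMs use; everything else about frontier training is scale.
text = "the quick brown fox jumps over the lazy dog "
vocab = sorted(set(text))
stoi = {c: i for i, c in enumerate(vocab)}
ids = np.array([stoi[c] for c in text])
V = len(vocab)

rng = np.random.default_rng(0)
W = rng.normal(0, 0.1, (V, V))  # W[current] = logits over the next char

xs, ys = ids[:-1], ids[1:]  # (context, target) pairs

def loss_of(W):
    """Mean negative log-likelihood of the targets, plus the probs."""
    logits = W[xs]
    logits = logits - logits.max(axis=1, keepdims=True)  # stable softmax
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    return -np.log(p[np.arange(len(ys)), ys]).mean(), p

first, _ = loss_of(W)
for _ in range(200):
    nll, p = loss_of(W)
    grad = p.copy()
    grad[np.arange(len(ys)), ys] -= 1.0      # d(nll)/d(logits) = p - onehot
    np.add.at(W, xs, -0.5 * grad / len(ys))  # SGD step, accumulating per row
final, _ = loss_of(W)

print(f"loss {first:.3f} -> {final:.3f}")
```

The loss falls from roughly the uniform baseline (ln of the vocab size) toward the bigram optimum, which is the whole point of the exercise in a course setting.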
> you probably dont think people that went to state schools are even human
On the contrary. I have been quite vocal about why I felt my education was lacking, and about the respect I have for those who have taken nontraditional paths.
There are three centres of "AI" gravity: GenAI, FAIR & RL-R
FAIR is fucked; they've been passed about, from standalone, to RL-R, then to "production" under industrial dipshit Cox. A lot of people have left or been kicked out. It was a powerhouse, and PSC (the 6-month performance charade) killed it.
GenAI was originally a nice, tight, productive team. Then the Facebook disease of doubling the team every 2 months took over. Instead of making good products and dealing with infra scaling issues, 80% of the staff are trying to figure out what they are supposed to be doing. Moreover, most of the leadership have no fucking clue how to do applied ML. They also don't know what the product will be. So the answer is A/B testing whatever coke dream Cox dreamt up that week.
RL-R has the future, but they are tied to Avatars, which is going to bomb. It'll bomb because it's run by a prick who wants perfect rather than deliverable. Moreover, splats perform way better than the dumbarse fully-ML end-to-end system they spent the last 15 billion trying to make.
Then there is the hand interaction org, which has burnt through not quite as much cash as Avatars, but relies on a wrist device that has to be so tight it feels like a fucking handcuff. That and they've not managed to deliver any working prototype at scale.
The display team promised too much and wildly underdelivered, meaning that Orion wasn't possible as a consumer product. Which lets the wrist team off the hook for not having a comfortable system.
Then there is the mapping team, who make research glasses that hoover up any and all personal information with wild abandon.
RL-R had lots of talent. But the "hire to fire" system means that you can't actually do any risky research unless you have the personal favour of your VP. Plus, even if you do perfect research, getting it to product is a nightmare.
At least they could pivot Hotdog or Not into a dick-pic classifier, which was less trivial when that show was airing.
$14 billion for a glorified Mechanical Turk platform is bananas. Between this, the $6 billion Jony Ive acquisition, and a ChatGPT wrapper for doctors called Abridge having a $5 billion valuation, this AI fraud bubble is making pets.com look like a reasonable investment.
Meta is so profitable with its ad network that it can spend tens of billions of dollars on money-losing projects and the stock still keeps going up, and the PE ratio is not that high either. Crazy.
Amazon does this with Jeff Bezos' project Alexa [0].
We really need to start taxing these companies appropriately. It's sad that the US treats business better than its own citizens. A business that goes bankrupt can shed its debt, while a person who goes bankrupt will retain theirs.
The guy's comments on AGI do not follow the scientific method. My hypothesis is that Meta wants enough AGI influencers that they can dictate the definition as the goalposts change, not quite based on skill set, obv.
The last two paragraphs sure contorted my face into an…interesting expression.
I’m confused at how Zuck has proven himself to be a particularly dynamic and capable CEO compared to peers. Facebook hasn’t had new product success outside of acquisitions in at least a decade. The fact that a newcomer like TikTok came and ate Instagram’s lunch is downright embarrassing. Meta Quest is a cash-bleeding joke of a side quest that Zuck thought justified changing the name of the company.
The kind of customer trustworthiness gap between Meta and competitors like Microsoft, Google, and Amazon is astounding, and I would consider it a major failure by Meta’s management that was entirely preventable. [1]
Microsoft runs their own social network (LinkedIn) and Google reads all your email and searches and they are somehow trusted more than twice as much as Meta. This trust gap actively prevents Meta from launching new products in areas that require trustworthiness.
Personally I don’t think Meta is spending $14B to hire a single guy, they’re spending $14B in hopes of having a stake in some other company that can make a successful new product - because by now they know that they can’t have success on their own.
I'm a Zuck fan, and I think I could mount a compelling case for that take. But that was not the article to do so. And people can def disagree, there's good arguments on the other side
> Facebook hasn’t had new product success outside of acquisitions in at least a decade.
You don't need a 'new trick' when the main business is so frigging profitable and scalable. There is this myth that companies need to keep innovating. At a certain point, when the competition is so weak, you don't.
Tobacco stocks have been among the best investments over the past century even though the product never changed. Or Walmart: it's the same type of store, but scaled huge.
> The fact that a newcomer like TikTok came and ate Instagram’s lunch is downright embarrassing.
Not really. Instagram is more popular than ever and much more profitable. It's not all or nothing. Both sites can coexist and thrive, like Pepsi and Coca-Cola.
Sure, a lot of companies don’t need to keep innovating.
But still, TikTok passed Instagram active users. This would be like if Universal theme parks surpassed Disney Parks in revenue. Sure, they can both coexist, but it’s an embarrassing reflection of mismanagement when a company that is so far ahead is surpassed by a competitor with better management working with fewer resources.
And there are a lot of examples like IBM, Kodak, and General Electric where the company appears untouchable with no need to innovate until suddenly they really do need to innovate and it all collapses around them.
- Quest was an acquisition (not an original idea) and has been a money sink; the VR market is declining.
- Instagram was an acquisition and as mentioned was surpassed in userbase by a newcomer despite cloning TikTok’s functionality relatively early.
- Facebook has a declining userbase. It’s still huge but it’s declining.
- Threads is the most impressive “original” new Meta product in terms of adoption but it’s about half the size of Twitter if not smaller, and growth is reported as slowing. It’s also mostly recruiting the same users who are already in Instagram.
- Meta Workplace got sunsetted and failed
I’m not betting against Meta as I acknowledge that they’re a cash cow, but I think that a better CEO for the company exists and could have done better. I think a better CEO would have beaten TikTok, would have Meta competing in more platforms than just social media, and would have maintained a better reputation for privacy and security, or at least done a better job of separating Facebook’s reputation from the rest of the company. Something like Meta Workplace was doomed from the start at least in part because companies don’t trust Meta with their sensitive data. That is the CEO’s fault.
Sorry for the late reply. I was being somewhat sarcastic, or maybe not. During the "dot com" era, investors were throwing money at anybody willing to take it, and a start-up was questioned about whether it was maintaining a sufficient "burn rate." In hindsight, it was a speculative bubble.
The company that I work for survived that era by not doing those things.
I guess, but if you had invested $1 million across 100 companies in 2000 and one of them was Amazon (and the other 99 went to zero), your combined investments would have had annual growth of 24% over the past 25 years.
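For what it's worth, the compounding arithmetic here can be sanity-checked (a quick sketch; the 24%, 25-year, and 100-company figures come from the comment above, and the implied winner multiple is derived purely from them, not from Amazon's actual price history):

```python
# Sanity-check the compounding claim: 100 equal stakes, 99 go to zero,
# and the blended portfolio still compounds at 24%/yr for 25 years.
years, cagr = 25, 0.24
portfolio_multiple = (1 + cagr) ** years    # growth factor of the whole pot
winner_multiple = portfolio_multiple * 100  # only 1/100th of capital did it

print(f"portfolio: {portfolio_multiple:.0f}x")
print(f"implied return on the single winner: {winner_multiple:,.0f}x")
```

Under those assumptions the whole pot grows roughly 217x, so the lone surviving stake must have returned on the order of 21,600x.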
Why should they be in that market at all? I don't understand why we've gotten to this place where every big tech company needs to have its fingers in every pie or "they're falling behind." AI/LLMs look to be on their way to being infrastructure more than products in and of themselves. A patient huge company could build on top of what others are doing, wait for the market to settle, then buy for vertical integration.
They're not even in the same ballpark as the SOTA models, and they're not even in the same ballpark as the SOTA open source models, which was supposed to be their niche.
But even if they were, it's not immediately clear how they plan to make any money with having an open source model. So far, their applications of their AI, i.e. fake AI characters within their social media, are some of the dumbest ideas I've ever heard of.
DeepSeek is a solid competitor in their niche, the open-source LLM niche. Llama 4 didn't impress, so Meta is very vulnerable even in the free niche they carved out for themselves.
How would you even begin to calculate that? They use AI to make their products more valuable to advertisers, but there's no way to say how much their revenue would decline if they suddenly eliminated AI.
That's just wrong. Partial acquisitions and minority shareholdings don't allow you to bypass antitrust investigations.
See 15 U.S.C. §18 for example. It is similar in the EU.
I don’t really understand the purpose of all this. Scale is not an AI research lab; it’s basically Fiverr.
Why would bringing people over from there make Meta more compelling to AI researchers?
I can’t imagine what kind of value he could bring to OpenAI.
The dude ruined Apple laptops for 5 years; he really should be an industry castaway.
My opinions are my own, ymmv
Rest of the article was good
Is it a hot dog? Yes, yes it is.
14 BILLIES!
[0] https://arstechnica.com/gadgets/2024/07/alexa-had-no-profit-...
> UPDATE: ... much of the $14b did not go to Alexandr
why not change the title as well?
[1] https://allaboutcookies.org/big-tech-trust
It's like if a Universal theme park surpassed Disneyland Paris in revenue.
Meta has other properties too.
Wrong metric to evaluate dynamism and capability.
Also, the Ray-Ban Metas have been pretty successful; they consistently sell out.
And what is the long-term revenue strategy for the Ray-Ban Meta glasses? You buy the glasses, and then what?
How so?
Disclosure: I work for one of the companies that fell behind.
They missed the boat on LLMs and have been playing catch up.