The code is licensed [1] under the "Apple MIT" license [2], which is considered open-source. The weights are under a different, more restrictive, license. This is mentioned at the bottom of the README.
It effectively prevents the community from using Apple's solution, but gives the Chinese everything they need to duplicate the results and push their own version.
I expect a Hunyuan-branded version of this model in six months. Probably with lots of improvements.
I'm all for a Chinese model takeover if this is how US tech giants treat AI. You can't hoard the flames forever, US hyperscalers.
The DoD ought to be advocating for a strong domestic open source stance to ensure our ecosystem doesn't get washed away. AI czar David Sacks has this view, but I suppose it's been falling on deaf ears when the hyperscalers crowd out the conversation.
Meta’s campaign to corrupt the meaning of Open Source was unfortunately very successful and now most people associate releasing the weights with open source.
It's gratifying. I used to tilt at windmills on HN about this and people would be telling me with absolute condescension how the ship had sailed regarding the definition of Open Source, relegating my own life's work to anachronism.
Watching people slowly wake up to how daft and hype-cycle-driven the misuse of the term was all along has been amazing.
The wildest one is how people say that just because you produce open source software, you should be happy that multibillion-dollar corporations are leeching value from your work while giving nothing back and in fact making your life harder. That's the biggest "piss on my back and tell me it's raining" bullshit I ever heard, and it makes me not want to open source a damn thing without feeling like a fool.
I think exactly like this. If I created a tool and it were used for free by billion dollar corporations to enrich themselves, I would consider it a personal loss.
So it wasn't a new campaign; at best it was re-appropriating the term "open source" in the software community in the way communities outside of software have always used it: a usage that predates software entirely, exists in parallel to the software community, and continues now.
In 30 years in tech, I have never once heard anyone use the term "Open Source" to refer to anything other than FOSS.
I have also never once heard anyone use the term FOSS outside of the written form.
So the opposite of what you said, I guess.
You also seem to be saying that the term "open source" existed before software did, so I feel compelled to ask: what do you think "source" stands for in "open source"?
The OSI definition and "open source purity" is designed by big tech to erode any value layer open source companies could use to threaten them.
New movements like "fair source", a form of source available + free use + expiring copyright, offer the ideal license. It's effectively unlimited use for customers and users, but has a baked-in "non-compete" preventing low-effort third parties from coming in and eating the market of managed services established by the authors.
We need to kill open source purity. Especially if we want to erode the hyperscalers that aren't even giving away the magic parts or valuable parts of their kingdoms.
Open source purity is a socialist dream while we're still living under the Empire. And it prevents the formation of a salient that can punch through. It's a very low local optimum.
I don't see any reason why you would want fair source authors to go "OSI" open other than taking their revenue stream as your own. The license bakes in contingencies in case the authors go out of business to open the license up for community ownership. That's good enough.
If these businesses were OSI-open, they would become unsustainable and impossible to scale into something formidable that could chip away at the entirely closed hyperscalers.
There's no reason to believe that weights are copyrightable. The only reason to pay attention to this "license" is because it's enforced by Apple; in that sense they can write whatever they want in it: "this model requires giving ownership of your first-born son to Apple", etc. The content is irrelevant.
> The only reason to pay attention to this "license" is because it's enforced by Apple
Yes, but for most people the most important reason to pay attention to ANY license is that it signals under what conditions the licensor is likely to sue you, not the actual legality (especially in the US, which has no general “loser pays” rule for lawsuits). A lawsuit is a cost most people don't want to bear while it is ongoing, and its unrecoverable costs remain once it is done, irrespective of winning or losing. On the other hand, few people care about being technically legal with their use of copyright-protected material if there is no perceived risk of enforcement.
But even if that wasn't true, and being sued carried no financial or other cost until the case was finally resolved, and only then if you lose, I wouldn't bet much, in the US, on the court system ultimately applying precedent in the most obvious way instead of twisting things to serve the particular powerful corporate interests involved here.
> There's no reason to believe that weights are copyrightable.
I know this is a long, nuanced, ongoing discussion. I'm very interested in it, but haven't read up on it for years. Could you elaborate a bit on the latest?
I was always in the camp that opined that "weights" are too broad a term for any sensible discussions about conclusions like "are (not) copyrightable". Clearly a weight that's the average of its training data is not copyrightable. But also, surely, weights that are capable of verbatim reproduction of non-trivial copyrightable training data are, because they're just a strange storage medium for the copyright data.
It's probably just Apple lawyers avoiding getting involved in any copyright lawsuit over the copyrightability of weights, by avoiding licensing it except for uses that are clearly fair use anyway, making copyrightability moot.
Yes, this seems more about protecting them from a lawsuit. I don't think they actually give a shit about the weights, or they wouldn't release them at all. I suspect they just know their training dataset isn't perfectly "clean" and don't want to accept any more liability than they already have.
That is simply not true. The details might vary by jurisdiction and the protection might not be under the exact name of “copyright” but there most certainly are comparable legal protections for the contents of databases (“tables of numbers”). See for example: https://europa.eu/youreurope/business/running-business/intel...
> This. Tables of numbers are explicitly not subject to copyright; that’s a copyright 101 fact.
Ok, but there's clearly more nuance there. Otherwise I could claim that any mp3 file I wanted to distribute is just a table of 8-bit integers and therefore not subject to copyright.
I wanted to reply in this direction. Ultimately, literally everything and anything in SW is a sequence of numbers, that anybody could easily put in some kind of table form.
I don’t know where the catch is, but that sentence can not be true in general.
Disney would like to have a word with you. Why would their pile of numbers that represents Avatar3.m4a be any more subject to copyright than Apple_2D_3D.bin? Or GPT52.mlx, or Opus45.gguf?
Not sure I've met one of those people in... a decade or so? Loving apple products has been an uphill road for a long time (and increasingly more so post-Jobs)
Pretty sure this is a joke, but the actual license is written by lawyers who know what they are doing:
> “Research Purposes” means non-commercial scientific research and academic development activities, such as experimentation, analysis, testing conducted by You with the sole intent to advance scientific knowledge and research. “Research Purposes” does not include any commercial exploitation, product development or use in any commercial product or service.
It kind of is, though. You use some input material to produce the weights via some process, even if the weights might not become exactly the same every time you reproduce the process; the production of the weights isn't done by working with the weights, but with the training material and the process to convert them into weights. The analogy to source code and the resulting binaries is there.
Training data and the weights produced are not source code, just as access to the resulting binaries is not a requirement for open source.
Open source does not require full working implementations. There's no requirement that a code snippet that I release be fully working and identical to a complete solution.
If all these AI models were trained on copyrighted materials to which the trainers had no right, is it wrong to steal their models and use them however we want? Morally I'd say absolutely not, but I'm sure these AI bros would vigorously defend their own IP, even if it was built on stolen IP created by humans.
> If all these AI models were trained on copyrighted materials to which the trainers had no right, is it wrong to steal their models and use them however we want?
If (which the courts seem to be pretty consistently finding) training models on copyright-protected works generally is fair use, though using models to produce works which would violate copyright if made by other means with reference to the source material is still a copyright violation, then training has no bearing on the legality of copying the models. (Even if it wasn't, then copying and using the models at all would violate the copyright of the original owners of the training material again and be illegal irrespective of the “license” offered by the model trainer.)
Morally? Well, pretty much the same dichotomy applies: if training the model isn't a violation of the source material's creators' rights, then the fact that it was trained without permission has no bearing on the morality of using the model without the trainer's permission; if it is a violation of the source material's creators' rights, then so is using the model, irrespective of the trainer's “license”, as the trainer has no right to permit further use of material they had no right to create.
The idea that the model is an intrusion on the rights of the creators of the materials used in training, and that this makes use of the model more rather than less permissible, legally or morally, takes some bizarre mental gymnastics.
Making 3D worlds like that is impressive. I used to build some VR worlds (hobby) and content generation is a huge time sink. I wonder if this tech will become accessible for that soon.
This is all going to become super accessible to everyone. And it'll become fast and eventually free.
Everyone will be able to flex their muscles as a creative. Everyone will be able to become an artist (expressing themselves through their unique lens) without putting points into a mechanical skill that is dimensionally orthogonal to idea expression and communication.
This is the "bicycle of the mind" that Steve Jobs talked about 40 some years ago. We've all had keyboards with which to express ourselves and communicate, but soon everyone will be able to visually articulate themselves and their thoughts. It's going to be so uplifting for society.
In fifty years we'll even be able to render our direct thoughts and mold them like clay. Share them directly with one another. Co-think.
Your daily reminder that neural network weights aren't creative work and as such aren't subject to copyright protection in the first place. The “license” is purely cosmetic (or rather, it has an internal purpose: it's being put there by the ML scientists who want to share their work and have to deal with the corporate reluctance to do so).
I don’t agree with this idea that for a model to be open source you have to be able to make a profit off of it. Plenty of open source code licenses don’t have that requirement.
> The license must not restrict anyone from making use of the program in a specific field of endeavor. For example, it may not restrict the program from being used in a business, [..]
Apparently licenses no longer have to actually meet all 10 of the criteria listed there to count as open source. OSI says AGPLv3 is open source, for example, even though it fails #10 ("No provision of the license may be predicated on any individual technology or style of interface").
AGPLv3 has provisions that are predicated on remote interaction over computer networks. Put modified AGPLv3 software on a computer that users interact with over RS232 terminals and you don't have to give users the source. Replace those RS232 terminals with X servers that let the users interact with the program over Ethernet and you do have to give those users the source.
The OSI (a consortium of cloud companies who benefit when you write nonfree software for them) is not actually an authority on what the words "open source" mean, no matter how hard they try to insert themselves into that role.
Models can't be open source anyway, because they don't have source.
While most people follow the OSD criteria, there is nothing that says open source software must follow it. Nor is the OSD the only set of criteria or the only definition.
Open source means the source is available. Anything else is just political.
> Open source means the source is available. Anything else is just political.
Where was that defined? And most of all, given the domain of information technology, who understands "open source" to cover cases where the source is available only for reviewing?
The purpose of words and terms is to let people exchange ideas effectively and precisely, without needing to explain the terms from the ground up every time. Having different groups hold divergent definitions of the same words is counterproductive to that goal. In my view, labeling a release "open source" while putting very big limitations on how the source may be used is not just marketing, it's miscommunication.
If "open source" and "source available" (and "open weights") mean the same thing, the how come people have come up with the two terms to begin with? The difference is recognized in official contexts as well, i.e. https://web.archive.org/web/20180724032116/https://dodcio.de... (search for "source available"; unfortunately linking directly doesn't seem to work with archive.org pages).
It doesn't seem there is any benefit in using less precise terms when better-defined ones are available.
Anything more than the definition is extra. In this case it's political. That doesn't require a definition.
> The purpose of words and terms is so that people can exchange ideas effectively and precisely, without needing to explain the terms every time from the grounds up.
Do you think the 10 criteria of a non-profit's opinion effectively conveys information without needing to explain the terms from the ground up?
> Having different groups having divergent definitions on the same words is counterproductive
Right, which is why the parent is wrong. It's just an organization's opinion
> If "open source" and "source available" (and "open weights") mean the same thing
They don't mean the same thing and I never claimed they do.
> It doesn't seem there is any benefit in using less precise terms when better-defined ones are available.
Use the more precise terms then. But you can't say the definition of open source is these 10 criteria that people disagree about...
> Open source means the source is available. Anything else is just political.
We don't have to have this debate again. Folks have tried this rhetorical tack so often there is an entire wikipedia page[1] dedicated to explaining the difference between source available and open source...
This is an opinion article and you should realize that opinions do not make definitions.
"Conversely, Richard Stallman argues the "obvious meaning" of term "open source" is that the source code is public/accessible for inspection, without necessarily any other rights granted"
See, here's another Wikipedia article with another opinion that disagrees, and RMS is obviously an authority.
That's source-available: you get to see the code and learn from it, but if you're not allowed to use it however you want (the only common restrictions being that you must credit the creator(s) and allow others the same freedom on derivative works), then it doesn't meet the traditional definition of open source.
I'm not trying to be too pc, but you can't really tell based on someone's name where they were born.
That said, the US only has some 5% of the world's population (albeit probably a larger proportion of the literate population), so you'd only expect some fraction of the world's researchers to be US-born. Not to mention that the US accounts for an even smaller fraction of births (2.5-3%, per Google), so you'd expect an even smaller fraction of researchers to be US-born. So even if we assume we're on par with peer countries, you'd expect US-born researchers to be only a small fraction of the overall research population. We'd have to be vastly better at educating people for it to be otherwise, which is a long shot.
Obviously this makes turning away international students incredibly stupid, but what are we to do against stupidity?
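Spelling out the share arithmetic above (a sketch; the 5% and 2.5-3% figures are the comment's approximations, not census data):

```python
# The comment's approximations, not census data.
us_pop_share = 0.05        # US share of world population (~5%)
us_birth_share = 0.027     # US share of world births (midpoint of 2.5-3%)

# If researchers were drawn uniformly from the world's population, ~5%
# would be US-born; drawn from recent birth cohorts, closer to ~2.7%.
print(f"population basis: {us_pop_share:.0%}")
print(f"birth-cohort basis: {us_birth_share:.1%}")
```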
FWIW, many of the researchers on the paper did not study in the U.S. but immigrated after their PhD studies.
I checked the first, middle, and last author: Lars Mescheder got his PhD in Germany, Bruno Lecouat got his PhD in France, Vladlen Koltun got his PhD in Israel.
(Edit: or maybe they did not actually immigrate but work remote and/or in Europe)
Apple is also a global company and has offices and research labs world wide. At least a couple of the authors seem to work for Apple but at their German lab.
Why don't we produce enough experts in the US to saturate our tech companies?
It's because American education culture is trash. American parents are fine with their kids getting Bs and Cs. Mediocrity is rewarded and excellence is discouraged in our schools, both socially and institutionally.
Meanwhile you have hundreds of millions of foreign born children pulling out all the stops to do the best they possibly can at school precisely so they can get into the US and work at one of our top companies.
It was never even a competition. Immigrants and their children will continue to outperform because it is literally baked into their culture, and baked out of ours.
It makes sense you're getting downvoted, but I thought it was actually an interesting question, so I spent the past hour or so going down an autistic rabbit hole (including finding the LinkedIns of the folks on the paper linked here to understand their backgrounds), heh.
Was somewhat surprised to learn that the pipeline wasn't built by industry demand; it was supply pressure from abroad that happened to arrive just as US universities needed the money (2009/10). In 1999, China's government massively expanded higher education; combined with a system where the state steers talent into STEM via central quotas in the gaokao, that created an overflow of CS-capable graduates with nowhere to go domestically. India's 1991 liberalization created the IT services boom (TCS, Infosys, the Y2K gold rush) and made engineering THE middle-class ticket: same overflow problem. US PhD programs became the outlet for both countries.
In that light, the university-side response probably wasn't stateside industry demand for loads of PhDs; who was hiring those then? Google Brain didn't exist until 2011, FAIR until 2013. It wasn't really until 2012+ that tech companies started hiring big research groups to actually advance the field, versus specialized PhDs here and there for products, so there wasn't a huge amount of pull from there.

At the same time, universities were responding to a funding crisis: after the 2008 state budget collapse, the gap was backfilled with international Master's students paying $50-80k cash (we do this heavily in Canada also), and that revenue cross-subsidized PhD programs (which are mostly cost centers, remember). Some also say PhD students were better labor, since visa constraints meant they couldn't easily bounce to industry and they'd accept $30k stipends, though I saw other research contradicting this idea.

The whole system was in place before "AI Researcher" was even a real hiring category. Then deep learning hit (2012), industry woke up, and it found a pre-built pipeline to harvest: the authors on that Apple paper finished their PhDs around 2012-2020, meaning they entered programs in 2009-2015, when CS PhDs were already 55-60% foreign-born. Those students stayed; 75-85% of Chinese and Indian STEM PhDs are still here a decade later. They're now the senior researchers publishing the papers you read here on HN.
This got me wondering: could the US have grown this domestically? In 2024 it produced ~3,000 CS PhDs, only ~1,100 of them domestic. To get 3,000 domestic you'd need a 2.7x bigger pipeline... which traces back to needing 10.8 million 9th graders in 2018 instead of 4 million (lol), or convincing 3x more CS undergrads to take $35k stipends instead of $150k industry jobs. Neither happened. So other countries pay for K-12 and undergrad, and the US captures the talent at PhD entry and keeps 75%+ permanently.
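The pipeline arithmetic, spelled out (the figures are the comment's own estimates, not official statistics; the ~10.8 million comes from rounding the ratio to 2.7 before multiplying, so the unrounded result is slightly higher):

```python
# The comment's figures, not official statistics.
total_cs_phds = 3000       # ~3,000 US CS PhDs produced in 2024
domestic_cs_phds = 1100    # ~1,100 of them domestic

# Scale-up needed for an all-domestic pipeline of the same size:
scale = total_cs_phds / domestic_cs_phds      # ~2.7x

ninth_graders_2018 = 4_000_000                # approximate actual cohort
ninth_graders_needed = ninth_graders_2018 * scale

print(f"scale-up: {scale:.1f}x")
print(f"9th graders needed: {ninth_graders_needed / 1e6:.1f} million")
```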
Seems like a reasonable system emerged from a bunch of difficult constraints?
(and just to reiterate, even tho it was an interesting research project for me, you can't infer where someone is directly from based on their name)
The output is not automatically metrically scaled (you can use postprocessing to fix this, but it's not part of this model). And you can't really move around much without getting glitches, because it only runs inference along one axis. It's also hard-capped at 768 pixels + 2 layers.
Besides, depth/splatting models have been around for quite a while before this. The main thing this model innovates on is inference speed, but VR porn isn't a use case that really benefits from faster image/video processing, especially since it's still not realtime.
This year has seen a lot of innovation in this space, but it's coming from other image editing and video models.
It's not for moving around, but for turning some image into a stereoscopic one (or 2 side-by-side images if you will). Lots of techniques for this exist, which usually turn an image into depth information using AI and then use any number of approaches to generate/warp 2 offset images from it.
So far the best looking results are still achieved with good old mesh warping and no inpainting at all. This may change that.
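For the curious, the depth-then-warp approach described above can be sketched in a few lines. This is a hypothetical, simplified illustration, not any project's actual pipeline; `synthesize_right_eye`, the integer disparity, and the left-neighbor hole filling are stand-ins for a learned depth model plus real warping/inpainting:

```python
import numpy as np

def synthesize_right_eye(image: np.ndarray, depth: np.ndarray,
                         baseline_px: float = 8.0) -> np.ndarray:
    """Warp an (H, W, 3) image into an offset eye view using inverse depth.

    Closer pixels (smaller depth) get larger horizontal disparity, mimicking
    stereo parallax. Holes left by the warp are filled from the left
    neighbor, a crude stand-in for inpainting.
    """
    h, w = depth.shape
    disparity = (baseline_px / np.maximum(depth, 1e-6)).astype(int)
    out = np.zeros_like(image)
    written = np.zeros((h, w), dtype=bool)
    for y in range(h):
        # Paint farthest-first so nearer pixels overwrite farther ones.
        for x in np.argsort(-depth[y]):
            nx = x - disparity[y, x]
            if 0 <= nx < w:
                out[y, nx] = image[y, x]
                written[y, nx] = True
        # Fill remaining holes from the nearest written pixel to the left.
        for x in range(1, w):
            if not written[y, x]:
                out[y, x] = out[y, x - 1]
    return out
```

Feeding the original as the left eye and this output as the right eye gives a side-by-side pair; real pipelines vectorize the forward warp and do proper inpainting or mesh warping instead of per-row loops.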
Ah, but if we're not talking 6DOF what's new with ml-sharp? We've had good autostereoscopy for a couple of years at least.
> So far the best looking results are still achieved with good old mesh warping and no inpainting at all.
I agree
> This may change that.
Seems not to be the case in my testing. The splats are too fine and sparse to yield an improvement. There are actually better (slower) image -> splat models than ml-sharp (with much higher dynamic range for the covariance) but I still don't use them over meshes for this.
The only improvements ml-sharp seems to add to the SOTA is 1) speed and 2) an interesting 2-focal layer architecture, but these are somewhat tangential steps.
I feel like I'm in a time loop. Every time a big company releases a model, we debate the definition of open source instead of asking what actually matters. Apple clearly wants the upside of academic credibility without giving away commercial optionality, which isn't surprising.
Additionally, we might need better categories. With software, the flow is clear (source, build, binary), but with AI/ML the actual "source" is an unshippable mix of data, infra, and time, and weights can be both product and artifact.
I'm glad you said it. Incredible tech, and the top comment is debating licensing. The demos I've seen of this are incredible, and it'll be great taking old photos (that weren't shot with a 'spatial' camera) and experiencing them in VR. I think it sums up the Apple approach to this stuff (actually impacting people's lives in a positive way) vs the typical techie attitude.
You could use pixi instead, as a much nicer/saner alternative to conda: https://pixi.sh
Though in this particular case, you don't even need conda. You just need python 3.13 and a virtual environment. If you have uv installed, then it's even easier:
git clone https://github.com/apple/ml-sharp.git
cd ml-sharp
uv sync
uv run sharp
Perhaps they lived outside of the kingdom, with an evil Stepmother who moved very slow, struggled with complex dependency collisions, and took up a bunch of unnecessary space? Such an experience could leave one very traumatized towards Conda, even though their real problems are the unresolved issues with their stepmother…
I’ve been using some time off to explore the space, and related projects StereoCrafter and GeometryCrafter are fascinating. Applying this to video adds a temporal-consistency angle that makes it way harder and more compute-intensive, but I’ve “spatialized” some old home videos from the Korean War and it works surprisingly well.
Weird how “hugging face” is a heartwarming little smiley face, while “face hugger” is a terrifying alien xenomorph. Seems like there’s an analogy to be made there…
It seems like it, although the shipped feature doesn’t allow for as much freedom of movement as the demos linked here (which makes sense as a product decision because I assume the farther you stretch it the more likely it is to do something that breaks the illusion)
The “scenes” from that feature are especially good for use as lock screen backgrounds
I assume this is the same spatial scenes feature that was on visionOS prior to OS 26. In my experience that was really incredible. You could take a standard 2D photo of someone and suddenly you were back in the room with them.
It doesn't, but it's pretty trivial to do if all you want is a pinholed mesh.
I managed to one-shot it by mixing in the mesh exporter from https://github.com/Tencent-Hunyuan/HunyuanWorld-Mirror but at that point you might as well use HWM, which is slower but much better suited to the level design use case.
Note that the results might not be as good as you expect, because this does not do any angled inpainting -- any deviation from the camera origin and your mesh will be either full of holes or warped (depending on how you handle triangle rejection) unless you layer on other techniques far outside the scope of this model.
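To make the pinhole back-projection and triangle-rejection idea concrete, here's a minimal sketch. It is a hypothetical illustration, not ml-sharp's or HWM's code; `depth_to_mesh` and the `max_edge_ratio` threshold are invented for this example:

```python
import numpy as np

def depth_to_mesh(depth: np.ndarray, fx: float = 1.0, fy: float = 1.0,
                  max_edge_ratio: float = 1.2):
    """Back-project a depth map through a pinhole camera into vertices and
    connect neighboring pixels into triangles, rejecting any quad that
    spans a depth discontinuity (which would otherwise become a stretched
    "rubber sheet" triangle, or a hole, when viewed off-axis)."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    verts = np.stack([(xs - cx) * depth / fx,
                      (ys - cy) * depth / fy,
                      depth], axis=-1).reshape(-1, 3)
    idx = lambda y, x: y * w + x
    tris = []
    for y in range(h - 1):
        for x in range(w - 1):
            quad = depth[y:y + 2, x:x + 2]
            if quad.max() / max(quad.min(), 1e-6) > max_edge_ratio:
                continue  # triangle rejection at the depth edge
            tris.append((idx(y, x), idx(y, x + 1), idx(y + 1, x)))
            tris.append((idx(y + 1, x), idx(y, x + 1), idx(y + 1, x + 1)))
    return verts, np.array(tris, dtype=int)
```

Rejecting quads leaves holes behind foreground objects; keeping them warps the mesh, which is the trade-off described above, and why angled viewing needs inpainting layered on top.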
And note that although HWM itself does support things like multi-image merging (which ml-sharp does not), in my testing it makes so many mistakes as to be close to useless today.
If you want something different that is designed for levels, check out Marble by World Labs.
I don’t know when Apple turned evil but hard for me to support them further after nearly four decades. Everything they do now is directly opposite of what they stood for in the past.
Apple trying to “open-source” something is pretty relevant. I don’t trust them at all. People constantly go at Microsoft but what Apple has done in the last 15 years is far worse. Their monopolies have had far worse impact than whatever Microsoft ever did with Windows and IE.
Doesn’t WebKit also not support modern web standards on iOS, despite being required for all mobile browsers on Apple platforms? And in particular, iOS intentionally does not properly support progressive web apps because they compete with the App Store and its 30% fees?
I decidedly disagree with just about everything you said regarding Microsoft. The Microsoft monopoly is the most life-sucking cancer the corporate world has ever experienced. Compared to that, the entire existence of Apple is merely a footnote. Don't mistake your stupid phone for the world.
I sank my twenties into the sh*tshow that was the Microsoft antitrust case. No, Microsoft shipping IE by default is pretty benign compared to what Apple has been doing, and for far longer than anything Microsoft ever did. In fact, one can make an argument that Windows was really an open platform for developers by today's standards.
I'm not talking about laughable little stunts like IE. I'm talking about the ongoing cancer that is eating up billions from little companies all the way to big corporations. All of that is ongoing, and they squeeze their prey for everything they have. They are the most disgusting and damaging disease you can imagine.
Once you start using even a small fraction of their tech it instantly metastasises throughout the entire organisation because of lock in and "open standards" that weirdly only work with their own tech. If the MS tech creates a problem the solution is to pour more MS tech onto the festering wound.
You apparently have been so insulated from how actual companies have to deal with tech that you think your little forays using computers are what everything should be measured by. All you have is a developer and hobbyist point of view.
The Swift language is open source, but the entire ecosystem is as closed as they get. The fact that no one is building anything outside of the ecosystem says everything about Swift and Apple’s intent. The fact that they still won’t support Linux on M chips also says they don’t care.
What would your definition of "instantly" be? I would argue that, compared to taking minutes or hours, taking less than a second is fast enough to be considered "instant" in the colloquial definition. I'll concede that it's not "instant" in the literal definition, but nothing is (because of the principle of locality).
> (...) Now, if I tell someone: "You should come to dinner more punctually; you know it begins at one o'clock exactly"—is there really no question of exactness here? because it is possible to say: "Think of the determination of time in the laboratory or the observatory; there you see what 'exactness' means"? "Inexact" is really a reproach, and "exact" is praise. (...)
Apple is not a serious company if they can't even spin up a simple frontend for their AI innovations. I should not have to install anything to test this.
Literally what this model does: create seemingly 3D scenes from 2D images, in the iOS Photos app. It works even better when you take a real spatial image, which uses dual lenses.
This is a free research project on GitHub. I think I'd rather apple focus on making hardware than hoarding GPUs for PR stunts to prove they are a "serious company".
Ah great. Easier for real estate agents to show slow panning around a room, with lame music.
I guess there are other uses? But this is just more abstracted reality. It will be inaccurate, just as summarized text is, and future people will again have no idea as to reality.
For panning you don't need a 3D view/reconstruction. This also allows translational camera movements, but only for nearby views. Maybe I'm being overly pedantic here, but for HN I guess that's appropriate :D
"Exclusively for research purposes" so not actually open source.
The only reference seems to be in the acknowledgements, saying that this builds on top of open source software.
[1] https://github.com/apple/ml-sharp/blob/main/LICENSE
[2] https://fedoraproject.org/wiki/Licensing/Apple_MIT_License
https://github.com/apple/ml-sharp/blob/main/LICENSE
Between this and the model's license, it seems like one is stuck with using this for personal use?
Though I'm sure they will shut their shop asap now that Nvidia basically bought them.
People slowly waking up to how daft and hype-driven the misuse of the term was all along has been amazing.
https://www.downloadableisnotopensource.org/
Open Source =/= free or software, just readable
So it wasn't a new campaign; at best it re-appropriated the term "open source" in the software community in the way communities outside of software have always used it, a usage that predates software entirely, exists in parallel to the software community, and continues to exist now.
I have also never once heard anyone use the term FOSS outside of the written form.
So the opposite of what you said, I guess.
You also seem to be saying that the term "open source" existed before software did, so I feel compelled to ask: what do you think "source" stands for in "open source"?
New movements like "fair source", a form of source-available plus free use plus expiring copyright, look like the ideal license. It's effectively unlimited use for customers and users, but has a baked-in "non-compete" preventing low-effort third parties from coming in and eating the market of managed services established by the authors.
We need to kill open source purity. Especially if we want to erode the hyperscalers that aren't even giving away the magic parts or valuable parts of their kingdoms.
Open source purity is a socialist dream while we're still living under the Empire. And it prevents the formation of a salient that can punch through. It's a very low local optimum.
I don't see any reason why you would want fair source authors to go "OSI" open other than taking their revenue stream as your own. The license bakes in contingencies that open it up for community ownership in case the authors go out of business. That's good enough.
If these businesses were OSI open, they become unsustainable and impossible to scale into something formidable that could chip away at entirely closed hyperscalers.
Yes, but the most important reason most people pay attention to ANY license is that it signals the conditions under which the licensor is likely to sue you (especially in the US, which does not have a general “loser pays” rule for lawsuits), not the actual legality. A lawsuit is a cost most people don’t want to bear while it is ongoing, and its unrecoverable costs remain once it is done, irrespective of winning or losing. And on the other hand, few people care about being technically legal in their use of copyright-protected material if there is no perceived risk of enforcement.
But even if that wasn’t true, and being sued was of no financial or other costs until the case is finally resolved, and only then if you lose, I wouldn't bet much, in the US, in the court system ultimately applying precedent in the most obvious way instead of twisting things in a way which serves the interest of the particular powerful corporate interests involved here.
I know this is a long, nuanced, ongoing discussion. I'm very interested in it, but haven't read up on it for years. Could you elaborate a bit on the latest?
I was always in the camp that opined that "weights" are too broad a term for any sensible discussion of conclusions like "are (not) copyrightable". Clearly a weight that's the average of its training data is not copyrightable. But also, surely, weights that are capable of verbatim reproduction of non-trivial copyrightable training data are, because they're just a strange storage medium for the copyrighted data.
What am I missing?
Any of the code that wraps the model or makes it useful is subject to copyright. But the weights themselves are as unrestricted as it gets.
Ok, but there's clearly more nuance there. Otherwise I could claim that any mp3 file I wanted to distribute is just a table of 8-bit integers and therefore not subject to copyright.
I don’t know where the catch is, but that sentence cannot be true in general.
I'm going to match this energy whenever I see it.
https://github.com/apple/ml-sharp/blob/main/LICENSE
> “Research Purposes” means non-commercial scientific research and academic development activities, such as experimentation, analysis, testing conducted by You with the sole intent to advance scientific knowledge and research. “Research Purposes” does not include any commercial exploitation, product development or use in any commercial product or service.
Open source does not require full working implementations. There's no requirement that a code snippet that I release be fully working and identical to a complete solution.
I don't think any in this particular space (image-to-3d gaussian representation) are, but then this is the first model I’ve seen in that space at all.
If (which the courts seem to be pretty consistently finding) training models on copyright-protected works generally is fair use, though using models to produce works which would violate copyright if made by other means with reference to the source material is still a copyright violation, then training has no bearing on the legality of copying the models. (Even if it wasn't, then copying and using the models at all would violate the copyright of the original owners of the training material again and be illegal irrespective of the “license” offered by the model trainer.)
Morally? Well, pretty much the same dichotomy applies; if training the model isn't a violation of the source material's creators' rights, then the fact it was trained without permission has no bearing on the morality of using the model without the trainers permission, if it is a violation of the source material's creators' rights, then so is using the model irrespective of the trainer's “license”, as the trainer has no right to permit further use of the material they had no right to create.
The idea that the model is an intrusion on the rights of the creators of the materials used in training, and that this makes use of the model more rather than less permissible, legally or morally, takes some bizarre mental gymnastics.
I'm writing open desktop software that uses WorldLabs splats for consistent location filmmaking, and it's an awesome tool:
https://youtube.com/watch?v=iD999naQq9A
This next year is going to be about controlling a priori what your images and videos will look like before you generate them.
3D splats are going to be incredibly useful for film and graphics design. You can rotate the camera around and get predictable, consistent details.
We need more Gaussian models. I hope the Chinese AI companies start building them.
Everyone will be able to flex their muscles as a creative. Everyone will be able to become an artist (expressing themselves through their unique lens) without putting points into a mechanical skill that is dimensionally orthogonal to idea expression and communication.
This is the "bicycle of the mind" that Steve Jobs talked about 40 some years ago. We've all had keyboards with which to express ourselves and communicate, but soon everyone will be able to visually articulate themselves and their thoughts. It's going to be so uplifting for society.
In fifty years we'll even be able to render our direct thoughts and mold them like clay. Share them directly with one another. Co-think.
> The license must not restrict anyone from making use of the program in a specific field of endeavor. For example, it may not restrict the program from being used in a business, [..]
AGPLv3 has provisions that are predicated on remote interaction over computer networks. Put modified AGPLv3 software on a computer that users interact with over RS232 terminals and you don't have to give users the source. Replace those RS232 terminals with X servers that let the users interact with the program over Ethernet and you do have to give those users the source.
You also need to explain how a set of terminals connected via RS232 does not constitute a "computer network".
Models can't be open source anyway, because they don't have source.
Open source means the source is available. Anything else is just political.
Where was that defined so? And most of all, given the domain of information technology, who understands "open source" to cover cases where the source is available only for review?
The purpose of words and terms is so that people can exchange ideas effectively and precisely, without needing to explain the terms every time from the ground up. Different groups having divergent definitions of the same words is counterproductive toward that goal. In my view, labeling a release "open source" while placing very big limitations on how the source is used is not just marketing, it's miscommunication.
If "open source" and "source available" (and "open weights") mean the same thing, then how come people came up with the separate terms to begin with? The difference is recognized in official contexts as well, e.g. https://web.archive.org/web/20180724032116/https://dodcio.de... (search for "source available"; unfortunately linking directly doesn't seem to work with archive.org pages).
It doesn't seem there is any benefit in using less precise terms when better-defined ones are available.
Anything more than the definition is extra. In this case it's political. That doesn't require a definition.
> The purpose of words and terms is so that people can exchange ideas effectively and precisely, without needing to explain the terms every time from the ground up.
Do you think the 10 criteria of a non-profit's opinion effectively convey information without needing to explain the terms from the ground up?
> Having different groups having divergent definitions on the same words is counterproductive
Right, which is why the parent is wrong. It's just an organization's opinion
> If "open source" and "source available" (and "open weights") mean the same thing
They don't mean the same thing and I never claimed they do.
> It doesn't seem there is any benefit in using less precise terms when better-defined ones are available.
Use the more precise terms then. But you can't say the definition of open source is these 10 points of criteria that people disagree about...
We don't have to have this debate again. Folks have tried this rhetorical tack so often there is an entire wikipedia page[1] dedicated to explaining the difference between source available and open source...
[1]: https://en.wikipedia.org/wiki/Source-available_software
"Conversely, Richard Stallman argues the "obvious meaning" of term "open source" is that the source code is public/accessible for inspection, without necessarily any other rights granted"
See, here's another Wikipedia article with another opinion that disagrees, and RMS is obviously an authority.
https://en.wikipedia.org/wiki/Open_source#%22Open%22_versus_...
All of this is pointless so the common and accepted definition should be preferred. Which does not add extra political criteria for requirements.
Paper: https://arxiv.org/abs/2512.10685
That said, the US only has some 5% of the world's population (albeit probably a larger proportion of the literate population), so you'd only expect some fraction of the world's researchers to be US born. Not to mention that US births are an even smaller fraction of world births (2.5-3%, by Google), so you'd expect an even smaller fraction of US-born researchers. So even if we assume that we're on par with peer countries, you'd only expect US-born researchers to be a fraction of the overall research population. We'd have to be vastly better at educating people to do otherwise, which is a long shot.
Obviously this makes turning away international students incredibly stupid, but what are we to do against stupidity?
Approximately 96% of the world's population is not American, so you should expect that really.
I checked the first, middle, and last author: Lars Mescheder got his PhD in Germany, Bruno Lecouat got his PhD in France, Vladlen Koltun got his PhD in Israel.
(Edit: or maybe they did not actually immigrate but work remote and/or in Europe)
2. People who were born outside the United States but moved here to do research a while back don’t suddenly stop doing research here.
It's because American education culture is trash. American parents are fine with their kids getting Bs and Cs. Mediocrity is rewarded and excellence is discouraged in our schools, both socially and institutionally.
Meanwhile you have hundreds of millions of foreign born children pulling out all the stops to do the best they possibly can at school precisely so they can get into the US and work at one of our top companies.
It was never even a competition. Immigrants and children of theirs will continue to outperform because it is literally baked into their culture - and it is baked out of ours.
Was somewhat surprised to learn that the pipeline wasn't built by industry demand; it was supply pressure from abroad that happened to arrive just as US universities needed the money (2009/10). In 1999, China's government massively expanded higher education, and combined with a system where the state steers talent into STEM via central quotas in the "gaokao", it created an overflow of CS-capable graduates with nowhere to go domestically. India's 1991 liberalization created the IT services boom (TCS, Infosys, the Y2K gold rush) and made engineering THE middle-class ticket, so same overflow problem. US PhD programs became the outlet for both countries.
In that light, the university-side response probably wasn't stateside industry demand for loads of PhDs; who was hiring those then? Google Brain didn't exist until 2011, FAIR until 2013. It wasn't really until 2012+ that the tech industry started hiring big research groups to actually advance the field, versus specialized PhDs here and there for products... so not a huge amount of pull from there.
Then, at the same time, universities were responding to a funding crisis: there was a 2008 state budget collapse, so it was backfilled with international Master's students paying $50-80k cash (we do this in Canada heavily also), and that revenue cross-subsidized PhD programs (which are mostly cost centers, remember). I also read claims that PhD students were also better labor: visa constraints meant they couldn't easily bounce to industry and they'd accept $30k stipends, though I saw other research contradicting this idea.
The whole system was in place before "AI Researcher" was even a real hiring category. Then deep learning hit (2012), industry woke up, and they found a pre-built pipeline to harvest. The authors on that Apple paper finished their PhDs around 2012-2020, meaning they entered programs 2009-2015 when CS PhDs were already 55-60% foreign-born. Those students stayed; 75-85% of Chinese and Indian STEM PhDs are still here a decade later. They're now the senior researchers publishing the papers you read here on HN.
This got me wondering, could the US have grown this domestically? In 2024 they produced ~3,000 CS PhDs, only ~1,100 domestic. To get 3,000 domestic you'd need 2.7x the pipeline...which traces back to needing 10.8 million 9th graders in 2018 instead of 4 million (lol), or convincing 3x more CS undergrads to take $35k stipends instead of $150k industry jobs. Neither happened. So other countries pay for K-12 and undergrad, capture the talent at PhD entry, keep 75%+ permanently.
Seems like a reasonable system emerged from a bunch of difficult constraints?
(and just to reiterate, even tho it was an interesting research project for me, you can't infer where someone is directly from based on their name)
https://sccei.fsi.stanford.edu/china-briefs/highest-exam-how...
https://en.wikipedia.org/wiki/Economic_liberalisation_in_Ind...
https://ncses.nsf.gov/pubs/nsf24300/data-tables
https://www.aau.edu/newsroom/leading-research-universities-r...
https://ncses.nsf.gov/pubs/nsf25325
https://www.science.org/content/article/flood-chinese-gradua...
https://www.insidehighered.com/quicktakes/2017/10/11/foreign...
I'm not kidding. That's going to be >80% of the images/videos synthesized with this.
The output is not automatically metrically scaled (you can use postprocessing to fix this, but it's not part of this model). And you can't really move around much without getting glitches, because it only runs inference along one axis. It's also hard-capped at 768 pixels + 2 layers.
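For the metric-scale point, a hedged sketch of the kind of postprocessing meant here: if you know one real-world distance between two points in the scene, you can solve for a global scale and apply it to the splat centers and sizes. The function name and array layout are hypothetical, not part of ml-sharp:

```python
import numpy as np

def rescale_splats(means, scales, known_pair, true_distance_m):
    """Rescale a splat cloud to metric units, given the real-world distance
    (in meters) between two splat centers picked by index. Illustrative only;
    the model itself does not emit metric scale."""
    i, j = known_pair
    model_dist = np.linalg.norm(means[i] - means[j])  # distance in model units
    s = true_distance_m / model_dist                  # global scale factor
    return means * s, scales * s                      # positions and sizes scale alike
```

Any single known measurement (door height, tile spacing) is enough, since the ambiguity is a single global scale.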
Besides, depth/splatting models have been around for quite a while before this. The main thing this model innovates on is inference speed, but VR porn isn't a use case that really benefits from faster image/video processing, especially since it's still not realtime.
This year has seen a lot of innovation in this space, but it's coming from other image editing and video models.
So far the best looking results are still achieved with good old mesh warping and no inpainting at all. This may change that.
> So far the best looking results are still achieved with good old mesh warping and no inpainting at all.
I agree
> This may change that.
Seems not to be the case in my testing. The splats are too fine and sparse to yield an improvement. There are actually better (slower) image -> splat models than ml-sharp (with much higher dynamic range for the covariance) but I still don't use them over meshes for this.
The only improvements ml-sharp seems to add to the SOTA are 1) speed and 2) an interesting 2-focal-layer architecture, but these are somewhat tangential steps.
Additionally, we might need better categories. With software, the flow is clear (source, build, binary), but with AI/ML the actual source is an unshippable mix of data, infra, and time, and weights can be both product and artifact.
Though in this particular case, you don't even need conda. You just need python 3.13 and a virtual environment. If you have uv installed, then it's even easier:
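Something like the following, assuming the repo installs its dependencies the usual way (the exact install step is a guess; check the project's README for the real commands):

```shell
# Hypothetical setup sketch using uv; repo layout assumed, not verified.
git clone https://github.com/apple/ml-sharp
cd ml-sharp
uv venv --python 3.13          # create a .venv with the required interpreter
source .venv/bin/activate
uv pip install -e .            # assumed; the repo may use requirements.txt instead
```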
https://github.com/TencentARC/StereoCrafter https://github.com/TencentARC/GeometryCrafter
I’d be keen too.
The “scenes” from that feature are especially good for use as lock screen backgrounds
Doesn't seem very accurate; no idea what the result would be with a photo of a large scene, which could be useful for level designers.
I managed to one-shot it by mixing in the mesh exporter from https://github.com/Tencent-Hunyuan/HunyuanWorld-Mirror but at that point you might as well use HWM, which is slower but much better suited to the level design use case.
Note that the results might not be as good as you expect, because this does not do any angled inpainting -- any deviation from the camera origin and your mesh will be either full of holes or warped (depending on how you handle triangle rejection) unless you layer on other techniques far outside the scope of this model.
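The triangle-rejection idea mentioned above can be sketched in a few lines: build a regular grid mesh over a depth map and drop any triangle that spans a large depth discontinuity, which is what otherwise produces the "rubber sheet" warping between foreground and background. A minimal numpy sketch with an assumed relative threshold:

```python
import numpy as np

def reject_stretched_triangles(depth, max_ratio=0.1):
    """Grid-mesh a depth map and drop triangles spanning depth discontinuities.
    Rejected triangles become holes; keeping them would warp instead."""
    h, w = depth.shape
    idx = np.arange(h * w).reshape(h, w)
    # Two triangles per pixel quad, indexed into the flattened depth map.
    t0 = np.stack([idx[:-1, :-1], idx[1:, :-1], idx[:-1, 1:]], axis=-1).reshape(-1, 3)
    t1 = np.stack([idx[1:, :-1], idx[1:, 1:], idx[:-1, 1:]], axis=-1).reshape(-1, 3)
    tris = np.concatenate([t0, t1], axis=0)
    d = depth.reshape(-1)[tris]                 # per-vertex depths, shape (T, 3)
    spread = d.max(axis=1) - d.min(axis=1)      # depth span within each triangle
    keep = spread < max_ratio * d.mean(axis=1)  # relative discontinuity test
    return tris[keep]
```

This is the "holes vs. warped" trade-off: a tight `max_ratio` gives clean silhouettes but holes that need inpainting; a loose one gives watertight but smeared geometry.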
And note that although HWM itself does support things like multi-image merging (which ml-sharp does not), in my testing it makes so many mistakes as to be close to useless today.
If you want something different that is designed for levels, check out Marble by World Labs.
You know Apple releases/funds a lot of open source, right?
Projects like WebKit, LLVM/clang, or CUPS (the printing system used across Linux)...
Once you start using even a small fraction of their tech it instantly metastasises throughout the entire organisation because of lock in and "open standards" that weirdly only work with their own tech. If the MS tech creates a problem the solution is to pour more MS tech onto the festering wound.
You apparently have been so insulated from how actual companies have to deal with tech that you think your little forays using computers are what everything should be measured by. All you have is a developer and hobbyist point of view.
"Less than a second" is not "instantly".
> (...) Now, if I tell someone: "You should come to dinner more punctually; you know it begins at one o'clock exactly"—is there really no question of exactness here? because it is possible to say: "Think of the determination of time in the laboratory or the observatory; there you see what 'exactness' means"? "Inexact" is really a reproach, and "exact" is praise. (...)
In fact you can already turn any photo into spatial content. I’m not sure if it’s using this algorithm or something else.
It’s nice to view holiday photos with spatial view … it feels like you’re there again. Same with looking at photos of deceased friends and family.