1. Your github doesn't have anything in it, it is just a generic MCP server.
2. How does this differ from blender-mcp released by ahujasid several months ago? That one actually does have the complete source and has quite a following.
It is indeed an MCP server, but I have added some things that make it different from being generic; it works smoothly, as you can see from the code.
And I am working on it. It is new, and I am adding other things to it, like generating three.js scenes, adding free Blender asset APIs, etc.
Happy if anyone else wants to contribute
The fade in effect when scrolling down is quite distracting, and makes reading the web page slower, because I have to wait for the text to appear. Yes, I have a fast computer.
It is also very choppy on my iPhone 16, not sure why.
Edit - I tried watching the demo, and it seems that on my phone the site is not usable: I can’t play the video, clicking play does nothing, and the page keeps scrolling and jumping.
HN has been gleefully rude about anything AI-related over the last year, even in cases, like here, where it's hardly relevant or appropriate. It's depressing, and I used to expect a lot better.
It takes more work to build a janky site than plain no-frills HTML/CSS, unless you vibecoded it or copy-pasted a crappy template.
I think we should be allowed to push back against sloppy work (which is different from beginner work) instead of ingratiating it with a smile.
We have the rest of you to baby them over adding the worst css transitions I’ve ever seen, something they deliberately swerved into.
They are accused of vibe coding it only through charity because it’s hard to imagine they did it themselves and went “yup that’s exactly what I wanted after spending that extra time adding it.” Whether it’s vibe coded or not isn’t really the point.
Grading on perceived effort is not a rubric destined to last. You cannot detect sloppy work from beginner work without context, and in any case a lot of beginner work these days (and to some degree for the rest of time!) is going to include LLMs or AI.
Is HN only for advertising startups these days? If this post had nothing to do with AI maybe the response would have included some real genuine criticism and feedback, with the assumption baked-in that a beginner was being coached.
To your last point, then downvote it if it's bad. You're right and I agree precisely that it being vibe-coded wasn't the point - but it was brought up regardless. If the result is bad the feedback is still the same. If the "problem" is just that they used tools you don't agree with using, then that's not feedback on the result.
I do not think it has much to do with how fast your computer is, it is probably timed, e.g. from the CSS: "transition-duration: 0.3s". It is quite annoying.
Almost akin to:
- "How many CSS effects do you want?"
- "Yes".
:P
At any rate, the project is pretty cool. Everything is just one prompt away now (not really, but still!).
Hi, quick feedback: the demo is extremely short, so I can't really say much. Please generate more complicated scenes and, most importantly, inspect the wireframe. From what I could glean from the demo, the generated models are tri-based instead of quad-based, which would be a showstopper for me.
Because traditionally, Blender modeling works best on a clean quad-based mesh. Just look at any modeling tutorial for Blender and one of the first things you learn is to always keep a clean, quad-based topology, and avoid triangles and n-gons as much as possible, as it will make further work on the model more painful, if not impossible. That starts with simple stuff like doing a loop cut to things like uv-unwrapping and using the sculpting tools. It's also better for subdivision surface modeling. You can of course use tri-based models, but if you want to refine them manually, it's often a pain. Usually, for me it's pretty much a "take as-is or leave it" situation for tri-based meshes, and since I see these AI-created models more as a starting point rather than the finished product, having a clean quad-based topology would be very important for me.
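If you want to check this on a generated model, here's a rough sketch. It's deliberately pure Python (it only needs the per-face vertex counts), and the bpy part is left in comments since I haven't verified it inside Blender:

```python
def topology_report(face_sizes):
    """Classify faces by vertex count: 3 = tri, 4 = quad, 5+ = n-gon."""
    tris = sum(1 for n in face_sizes if n == 3)
    quads = sum(1 for n in face_sizes if n == 4)
    ngons = sum(1 for n in face_sizes if n >= 5)
    total = len(face_sizes)
    return {
        "tris": tris,
        "quads": quads,
        "ngons": ngons,
        "quad_ratio": quads / total if total else 0.0,
    }

# Inside Blender you could feed it the active object's polygons, roughly:
# import bpy
# sizes = [len(p.vertices) for p in bpy.context.active_object.data.polygons]
# print(topology_report(sizes))

print(topology_report([4, 4, 4, 3, 5]))
```

A low quad_ratio on an AI-generated mesh is a quick red flag before you invest time in retopology.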
Yes, because uv-unwrapping is much more predictable with quads, and you can place seams along edge loops. I'm by no means an expert here, and maybe there are tools which make this similarly easy with non-quad topology, but at least from what I've learnt, the clean grids you get from quad meshes are simply much easier to deal with when texturing.
An MCP server is not necessary; one can just make API calls to LLM services directly from within Blender, and the LLMs already know Blender very well: it is open source, and there is a gargantuan amount of data about it online in the form of tutorials and so on, all in the foundation models' training data.
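For what it's worth, the direct-call approach is roughly this (a sketch, not a tested add-on: the endpoint URL, key, and model name are all placeholders, and `exec`-ing model output is obviously at-your-own-risk):

```python
import json
import urllib.request

API_URL = "https://api.example.com/v1/chat/completions"  # placeholder endpoint
API_KEY = "sk-..."  # your key here

def build_request(prompt):
    """Build an OpenAI-style chat completion request asking for bpy code."""
    body = json.dumps({
        "model": "some-model",  # placeholder model name
        "messages": [
            {"role": "system",
             "content": "Reply with only a Python script using Blender's bpy API."},
            {"role": "user", "content": prompt},
        ],
    }).encode()
    return urllib.request.Request(
        API_URL, data=body,
        headers={"Authorization": f"Bearer {API_KEY}",
                 "Content-Type": "application/json"})

def extract_code(response):
    """Pull the script text out of a chat-completion response dict."""
    return response["choices"][0]["message"]["content"]

# In Blender's scripting tab you would then do something like:
# with urllib.request.urlopen(build_request("add a red cube")) as r:
#     script = extract_code(json.load(r))
# exec(script)  # runs the generated bpy code -- review it first!
```

No extra process, no protocol; the trade-off is you lose the standardized tool interface that MCP gives every chat client for free.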
Nice idea - I’m adding it to my list over at https://taoofmac.com/space/ai/mcp and will try it out later as I have been dabbling in Blender plugins myself.
I apologize for this extremely dumb question, but how is this a "server"? As far as I'm aware Blender is a local app. It can run without an internet connection. If an LLM wants to call into it, it needs to call its local python API.
Is this just unlucky naming or am I missing a critical piece?
MCP is a spec that is attempting to standardize a communication pattern for registering and calling tools from an LLM. Part of the spec is a server that exposes specific JSON-RPC endpoints with a registry of the available tools, resources, and templates, and a way of executing them. That's the server; in this case the server acts as the interface into Blender.
The pipeline from the LLM through MCP to the app looks like:
LLM -> chat app -> MCP client -> MCP server -> specific app (Blender)
The chat app doesn’t know how to talk to Blender. It knows about MCP and links in a client. Blender exposes its functionality via an MCP server. MCP connects the two.
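To make that concrete, here's a toy dispatcher in plain Python showing roughly what an MCP server does at the JSON-RPC level. It's a sketch only: real servers use the official SDK and speak MCP's full handshake, and the `create_cube` tool here is made up:

```python
import json

# A made-up tool registry: name -> (description, handler).
TOOLS = {
    "create_cube": ("Add a cube to the Blender scene",
                    lambda args: f"cube of size {args['size']} created"),
}

def handle(request_json):
    """Dispatch a JSON-RPC request the way an MCP server roughly would."""
    req = json.loads(request_json)
    if req["method"] == "tools/list":
        result = {"tools": [{"name": n, "description": d}
                            for n, (d, _) in TOOLS.items()]}
    elif req["method"] == "tools/call":
        _, fn = TOOLS[req["params"]["name"]]
        result = {"content": fn(req["params"]["arguments"])}
    else:
        return json.dumps({"jsonrpc": "2.0", "id": req["id"],
                           "error": {"code": -32601, "message": "unknown method"}})
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

print(handle(json.dumps({"jsonrpc": "2.0", "id": 1, "method": "tools/list"})))
```

The chat app's MCP client calls `tools/list` once to learn what exists, then `tools/call` whenever the LLM decides to use a tool.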
A server-client architecture can run on a single computer. You just need one piece of code to act as the "server" and one to act as the "client". Technically you don't even necessarily have to involve the networking stack, you can just communicate between processes.
External Prompt -> Claude -> MCP -> Blender -> Cycles -> .exr -> show Claude how good its work actually is -> Correct -> New prompt -> ... Rinse and repeat until result actually looks realistic.
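The loop above could be sketched like this; every helper is a stub standing in for a hypothetical MCP tool call, since nothing like this exists in the project as far as I know:

```python
def refine_scene(prompt, max_iters=3, target=0.9):
    """Sketch of the feedback loop: generate, render, critique, repeat.
    All helpers below are stubs for hypothetical MCP tool calls."""
    scene = generate_scene(prompt)              # LLM -> MCP -> Blender
    for _ in range(max_iters):
        image = render(scene)                   # Blender -> Cycles -> .exr
        score, critique = judge(image, prompt)  # show the LLM its own work
        if score >= target:
            break                               # looks realistic enough
        scene = generate_scene(critique)        # corrected prompt, next round
    return scene

# Stubs so the control flow is runnable; a real setup would call Blender.
def generate_scene(p): return {"prompt": p}
def render(s): return f"render of {s['prompt']}"
def judge(img, p): return (0.95, "looks fine")

print(refine_scene("a cozy cabin"))
```

The interesting missing piece in practice is `judge`: feeding the rendered image back to the model so it can actually see its own output instead of working blind.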
Of course not. I would just ask for an MCP that watches the generated movie, so I can use my time for more important matters; I just want the system to work by itself entirely. We could have these fully self-contained consumerism silos and just enjoy being called their gods, but perhaps we could automate such egocentrism too.
I managed to do something like this directly in WebGL via three.js in Windsurf two weeks ago; you can see the resulting animation here: https://infinite-food.com/ I also did an SVG animation and a globe with geopoints. So much easier than by hand...
https://github.com/ahujasid/blender-mcp
https://news.ycombinator.com/item?id=43357112
No prompts, no functions, nothing in the github repos.
https://github.com/pranav-deshmukh/blender-mcp/blob/main/add...
Probably vibecoded slop.
This is so sad to see animation hurting a good product.
I don't have Claude and have no experience with MCP. How do I use it with other tools such as LM Studio, Ollama, etc.?
And you can use the free tier of Claude Desktop, or other open-source LLMs.
Well, now I know why "they" bother to digitally simulate my existence, and why movies are so terrible.
now who runs the AI?