Good riddance to bad trash. To me, this idea represents the absolute worst of the AI wave (out of a lot to choose from): a corporate-controlled endless stream of the feelies to keep people plugged in and scrolling for nobody's benefit except those in control of the output. If "entertainment" can be produced algorithmically at a volume and level of quality that the masses find attractive, it's only a matter of time before bad (worse?) actors take control of it to start highly targeted campaigns of influence, far worse than what we've already seen.
I'm having trouble understanding this. There were some very funny videos, created by people with a great sense of humor, and I happen to enjoy laughing, and I don't feel bad about that. I always saw it as the Vine of AI.
For a litmus test of your perspective, try using Sora. Try to make a video that makes someone genuinely laugh. Sora doesn't prompt itself; human creativity and humor are still required.
Sure, it was moderated to heck, like all models attempting to avoid PR disasters (see Grok), but, just as with YouTube and broadcast TV, there's still a corporate-friendly surface area that excludes porn, gore, etc., that people can enjoy. And yes, people like different things.
I feel like taking in GenAI content, even if it makes me laugh, probably does something bad to my brain. It looks like real life, but the physics is just wrong in ways that range from obvious to very subtle. I don’t want to feed my brain videos of things that look photorealistic but do not depict reality, that just seems foolish somehow.
Like, imagine if you watched a bunch of GenAI videos of cars sliding on ice from the driver’s perspective. The physics is wrong, and surely it’s going to make you a worse driver because you are feeding your internal prediction engine incorrect training data. It’s less likely that you’ll make the right prediction in real life when it counts.
I was thinking about this while typing. I don’t really care about classically animated content; it’s generally not trying to be indistinguishable from real life and I don’t feel like my brain trains on it.
But I think I do have similar feelings about special effects. A difference is that special effects tend to depict scenarios very outside of the envelope of normal experience, so probably not very damaging if my model of “what does a plane crash look like” is screwed up.
Though some effects probably are damaging - how many people subconsciously assume cars explode when they are in an accident? A poor mental model of the odds of a car exploding could cause you to make poor real-life decisions (like moving someone out of a wrecked car in a panic instead of waiting for EMS, risking spine/neck injury).
if it worked this way, we could get good at golf by watching TV, writing songs by listening to the radio, or doing math by watching 3b1b. but it doesn't - we don't learn that way, for better or worse.
I agree with rogerrogerr, and your comparisons don’t make sense to me. Getting good at complex motions and understanding theory is far different than building a simple model of cause and effect in the real world.
Most people can’t explain the physics they see, but they can deduce enough to be able to predict the effects of physical actions most of the time.
When I watch a film, I know it is fiction and special effects. But most of the fake AI-generated videos are being passed off as real on social media. It is exhausting (and increasingly difficult) to analyze every video on my feed to try to figure out if it's real.
Effort makes a great deal of difference for me. The effort itself, the fact that it's there.
I am willing to suspend disbelief for Terminator 1, even when it's obvious that the shot is of the doll's head.
But it is insulting to feed slop to your audience; it shows you didn't even try.
I have actually seen one slop video that I kinda enjoyed - it was obvious that great effort was put into the script and the details, just as it was obvious it wasn't being passed off as the real thing.
that's just empty consumption, there's nothing that makes art great in algorithmically generated content except at the shallowest of levels. I mean no disrespect, but that is extremely sad and all too indicative of the instrumental reasoning of the industrial milieu. It's about 2 steps above marrying a sex doll.
The real problem with AI slop is not the AI. It's the people. It's always the people.
The clickbait has started fooling people more than before, with the latest videos being halfway believable (except for the circumstances of the videos).
Technology enables the most malicious and self-interested, and systems need to be adjusted to not reward that, or users need to become wise to it.
With the amount of early 2000's style clickbait ads still around, I'm not sure we ever vanquished Web 1.0 style clickbait, it just got crowded out by ever more sophisticated forms.
Yes, I don’t doubt that there was some very high quality human-moderated output. The point is that you likely can’t accurately distinguish the human-moderated output from the entirely generated slop (especially as it’s being trained and refined on the rest of the content), and so what chance does the average non-technical person have?
Then, when they start ratcheting the slop ratio up (likely under the justification of keeping up with declining creator engagement), the consumers get more and more adjusted to a pure-slop feed, until bingo you have a direct line into the midbrain of millions of consumers/voters/parents/employees/serfs.
They are not getting rid of Sora because people won't want AI videos lol. They're getting rid of Sora because they're so behind in this realm. AI videos online are mostly made with Chinese models, and the situation has been like this for more than a year.
The percentage of AI videos over the internet will certainly not decrease after Sora is gone.
The question is when will Chinese coding models have their Seedance moment and squeeze Opus/Codex out of market. It weirdly feels impossible and inevitable at the same time.
I'm kinda surprised about how hard GenAI fell on its face in the arts (including SD and other video generators). It seemed so promising, when SD came out and it turned out the model fit on people's GPUs. People started making LoRAs, hyperparameter tunes, mixing models, training models for representing characters, ComfyUI and Controlnet came out yada yada.
Then it became synonymous with slop, lowest common denominator content made without care, instead of a tool for enabling people willing to put in a varying level of skill, kinds of expertise and effort, like coding models did.
Sure, and I'm also consuming a gigantic quantity of GenAI art while knowing it, completely against my will. Which, as with OP, has soured my overall perception of it.
The existence of inoffensive use cases doesn't invalidate anything OP is saying, that's just a natural human reaction to overexposure of a technology.
In the span of less than two years, pretty much everywhere I look has been inundated with zero-effort spam, manipulated imagery, etc. that has had a net-negative impact on my life. Even if it may also be helpful for a small business making a flyer or whatever, without actively making my life worse, that doesn't really move the needle on my overall attitude.
Sure, and there's a lot of great man-made art that I don't enjoy quite as much because I can't get the question out of my head: is this even a photograph someone took, is this even a painting someone bothered to paint? I get the sense that there are a lot of folks who just want the end result judged on its own merits - is it a funny vine or not, is it a compelling, beautiful digital painting or not - but I want to know whether there's a person behind it, expressing themselves, growing as an artist, etc., or if the picture on my phone is totally divorced from any human's actual desire to say something. Having them mixed in the same pot just makes me less hungry.
I had so much fun making videos with my mom when it came out. During the first two weeks, we made over 100 cameo videos together - we were constantly running up against the upload limit. It unleashed tons of genuine creativity, joy, and laughter from us.
After those first two weeks though, we just… didn’t use it again. The novelty wore off and there wasn’t anything really to bring us back. That was the real downfall of Sora.
The two-week pattern shows up across basically every creative AI tool. It's not a flaw in the product, it's a flaw in the use case model. These tools get designed and marketed around peak novelty moments, but the economics require daily active use.
The AI tools that do stick are almost all embedded in existing workflows rather than standing alone. Cursor works because it lives inside the editor you already open every day. GitHub Copilot works for the same reason. You don't decide to use them, they're just there. Sora required you to decide you wanted to make a video, which is a much higher intent bar.
The apps that survive the novelty cliff are the ones that solve a problem you have on a recurring basis with zero extra activation energy. Most creative AI tools solve problems you have occasionally, enthusiastically, and then not at all.
(FYI, this is an LLM bot, check their comment history and note the repetitive structure with every comment they've ever posted all within the last hour)
> This is the right question but hard to answer in practice ...
> The brownfield vs greenfield split is the real answer to ...
> The babysitting point is the one people keep glossing over ...
I dunno, it was the same for me and creative writing with AI.
First it looked crazy inventive, good at writing snappy dialogue, and in general a very good fount of ideas.
Then the same concepts, turns of phrase, story ideas kept reappearing, and I kinda soured on the concept.
I haven't done it in a while, but that kind of usage really shows the weakness of LLMs - if you keep messing with its generations, editing what it made, and the context length keeps increasing, it's more and more likely to go into dumb mode, where it feels like talking to GPT-3: constantly getting confused, contradicting itself, etc.
I fairly consistently use NotebookLM for podcasts of academic papers I'm reading during my PhD. You have to go read the paper yourself afterwards, but it makes better use of time at the gym or while doing dishes/groceries.
I also like doing that for topics I'm tangentially interested in. One minor thing I find annoying is that the narrators switch roles mid-conversation. They start with the female voice explaining a concept to the male voice and suddenly they swap - by which point I've identified with the voice being explained to.
I found the podcast's bantering distracting, and the breathless enthusiasm too. I guess there was a way to make it more no-nonsense? But I found I lost content when I tuned for brevity.
I just use elevenreader for this. I copy in essays or whatever text I want to listen to and it works decently well. It's far from perfect, but certainly good enough.
Sometimes I'll take deep research output and listen to it too that way.
Writing a book takes 2-3 years on average. Papers are published every day. Having a cute two-person "conversational chat" with audio works for a lot of people vs. just reading a paper. "No benefit" to you, perhaps. Don't generalize from your lived experience.
The Cameo feature is really excellent. The likeness of both the person and the voice is exceptional. I really enjoyed making some funny Cameo videos with my friends. I don't know of another simple way to insert your own avatar with your own voice into a video, and I'm pretty deep in this space.
I get your point but it goes too far in the opposite direction. We should now discuss absolutely nothing in relation to Sora and genAI videos? That seems overly charitable to the platform.
Agreed. I did try this out! So the reply to the original comment is dumb. I actually dismissed it for being flippant.
Your reply is more interesting. Hence my (albeit maybe snarky) chiming in. So the original comment does end at a very specific app/sora related conclusion. "Sora didn't keep us coming back."
If I may amend your scenario: imagine this bar is actually in the center of SF or across the street from OpenAI or whatever. We're on HN discussing a post on X about Sora.
The appeal to humanity is not wrong. My point is more let's keep the connection with that humanity in relation to AI, to Sora, to what's going on in this forum.
As someone who generally liked the products that OpenAI puts out, I think Sora was their first product that I really didn't like. I liked GPT primarily because I felt like it respected me: I never felt like it was trying to distract me from my work or get me to waste time doomscrolling. Its primary value proposition to keep me using it wasn't to trick me with addictive content, but to get me high-quality answers as fast as possible. And I felt like OpenAI's other products, like Deep Research, agent mode, etc., were the same way. Even Atlas, although I suspect it will be equally ill-fated, attempts to follow this same pattern. It really felt like OpenAI was separating themselves from the common popular apps like TikTok, Reddit, Instagram, etc., which seemed to exist entirely to distract me from things I care about and waste my time.
Sora was the first product OpenAI shipped where I felt that fell into that second category, and for that I was very disappointed. You have all those GPUs, and the most incredible technology in the world, and the most brilliant engineers, and all you can think to do with them is to make an app that just makes meme videos? I mean, c'mon!
Still, I am mystified by how rapidly Sora went from launch to shutdown. Does anyone have any guess what happened there? Even if Sora wasn't a spectacular success, it seems to me like subsequent model improvements could have moved the needle - shutting it down so soon seems premature. I mean, what if this is the equivalent of making ChatGPT with GPT 3?
For me, Sora changed the way I viewed Sam Altman as a person.
I really thought he wasn't like the previous generations of tech leaders - as you mentioned OpenAI (with him in charge) seemed to be genuine about making a product that could improve people's lives.
He'd go on podcasts and quite convincingly talk about how ChatGPT could prevent real world harm like suicide, and possibly even contribute to helping disease too.
Then they drop this and it just doesn't gel. So much of what they've done since has just doubled down on the Zuck-esque scumminess and greed too.
Part of me still sees Dario as genuine in the way that Sama seemed back in 2024, but I'm sure once he has enough investor pressure he'll cave the same way too.
> Still, I am mystified by how rapidly Sora went from launch to shutdown. Does anyone have any guess what happened there?
My guess is they overcommitted server/energy resources, since they were generating ~30 images per frame for 1 second of video, for results that might be discarded and then tried again.
Now that energy costs are increasingly less predictable because of the war, they're prioritizing what is sustainable. Willing to blow up the $1 billion Disney deal for Sora, because that's a popular IP that would have increased discarded server time.
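Taking the comment's guess at face value, the waste compounds quickly. A quick back-of-envelope sketch - both the ~30-candidates-per-frame figure and the frame rate are assumptions from the comment above, not published OpenAI numbers:

```python
# Rough compute estimate for the "30 images per frame" guess.
# All constants are assumptions for illustration, not official figures.
FPS = 24                   # assumed output frame rate
CANDIDATES_PER_FRAME = 30  # guessed sampling overhead per final frame
CLIP_SECONDS = 10          # a typical short Sora clip

images = FPS * CANDIDATES_PER_FRAME * CLIP_SECONDS
print(f"~{images:,} image generations per {CLIP_SECONDS}s clip")
# Every discarded attempt that gets regenerated multiplies this again.
```

Under those assumptions a single 10-second clip already implies thousands of image generations before a user even decides whether to keep it.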
I'm also curious if Sora has been used by Iran to generate those Lego propaganda videos critical of the President. Given how close Sam Altman is with the current administration, I wouldn't be surprised if Sora is now reserved for U.S. government propaganda only.
Since you seem to be better informed, I'm also interested in what self hosted models for video you recommend for creating my own Lego movie clips now that Sora is no longer an option for a paid service. There's tons, right?
I'm not sure, but you could be right. Sora is/was the top-of-the-line platform for video generation, and the Lego IP videos were polished. Makes sense to outsource when your own energy grid is being destroyed. Anyone with an account and VPN could utilize the platform.
I'd like to know what self hosted models they've been using, if any, and who provided them, trained on Lego IP.
There is a link at the top of that document that takes you to the original version which was published last September. As far as I can tell it’s mostly the same as before.
The thing that didn't make sense with this app: who would ever want to scroll only AI generated videos over a combined feed?
In practice people would just generate the videos with the app then post them on regular social media in which case OAI would not get the ad revenue for that
Its the age-old "your product is just a subset of another product"
I've always suspected video-gen is basically a loss leader for OpenAI, Gemini, and Grok. They can't convince the general population that AI is world-changing trillion dollar tech with "vibe coding", but realistic fake videos are impressive at a glance, and might convince many non-technical people that AI/LLMs are something revolutionary.
I think of them all Gemini has the most viable use case when Veo is paired with their advertising platform. It does genuinely open the door to a lot of cost saving for promo shots of products etc
Agreed. For reference, if Sora 2 was able to generate a Google UGC product video for me, it would cost like $10 and I'd have it within 30 minutes, including editing. Paying a UGC content creator would cost me $50-200, with no control over the final shots, plus I'd have to wait for them to respond. I have 30 products in my e-commerce store - these costs add up like crazy.
The other one is TV ads/cinematic ads. For a 30-second clip, expect to pay an agency $5-10k. Within a couple of days, I can make a video ad myself for maybe $50 in API costs. Cost of production is so crazy in marketing.
Obviously this is all under the assumption AI is good enough to do either of those things, which it hasn't been so far; the best I've gotten is b-roll shots to stick together for an ad.
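The arithmetic behind "these costs add up" is easy to sketch. All figures below are the commenter's rough estimates, not real pricing:

```python
# Back-of-envelope: AI-generated vs. human-made UGC product videos,
# using the rough per-video figures from the comment above.
N_PRODUCTS = 30                 # products in the store

AI_COST_PER_VIDEO = 10          # ~$10 per generated video (estimate)
UGC_COST_PER_VIDEO = (50, 200)  # $50-200 per video from a creator (estimate)

ai_total = N_PRODUCTS * AI_COST_PER_VIDEO
ugc_low, ugc_high = (N_PRODUCTS * c for c in UGC_COST_PER_VIDEO)

print(f"AI route:  ${ai_total}")
print(f"UGC route: ${ugc_low}-${ugc_high}")
print(f"Savings:   ${ugc_low - ai_total}-${ugc_high - ai_total}")
```

On these assumptions the AI route runs $300 for the whole catalog versus $1,500-6,000 for human creators, which is the gap driving the comment's enthusiasm.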
This is what I see, outside the HN bubble. If you work retail or weld pipes together or whatever, AI is of no use to you. On the contrary, if tech thought leaders are to be believed, you'll be out of a job soon, replaced by a lifeless robot. Fuck that.
I never used Sora to watch content, but there was a guy on TikTok that used to post these great Sora generated videos that I really liked. Honestly, I was kind of surprised to hear that they were shutting this app down today.
There's only one highly monetizable use for AI video generation but unfortunately it's fake revenge porn. You'll know the whole thing is about to collapse when the frontier models break that glass (as OpenAI is already preparing to do with sexting).
Why does it need to be revenge porn? Pretty sure regular old porn has a large market there where people can specify what they idealistically want to see vs trying to find it, if it exists.
Not every place has LEGO incest porn… or whatever the kids are into these days.
I for one can't wait for ChatGPT-style sexting to become a thing.
It's not just dirty talk. It's a whole new paradigm in verbal filth.
On the topic of Sora, though: current models are astounding. I watched a clip of Leonidas, Aragorn, William Wallace, Gandalf, etc. all casually riding into a generic medieval town together, and if you had shown that to me a few years ago, it would have seemed like magic. We're not far off from concerts featuring only dead artists, and all video and image testimony becoming unreliable. Maybe Sora was a victim of timing or mismanagement, because I don't see how this isn't still a seismic shift in the entertainment industry.
I mean, you just outlined why it won't be a seismic shift: the only way the videos reliably stay on-model is if that model violates someone's copyright. And then when the movie is made the output itself isn't copyrightable (the ultimate arrangement may be but no individual frame is).
If you consider how the reading, audio, and video you consume either builds or degrades your capabilities and character, as the food or poison you consume either builds or degrades your physical health, then [looking at US top videos on YouTube any given day] literally IS taking poison for your mind.
Depending on the poison and the dosage, eating the poison for your body instead may be the lesser of the two evils.
We can have that discussion, or we can have the more interesting discussion of just how much big corporate intellectual property, franchises, and brands have their hooks in pop culture.
Big IP is strong arming OpenAI, Suno, and all the rest.
It'll be interesting to see whether creators at the bottom of the pyramid can effectively create new brands and IPs at a fast enough rate to displace the lack of being able to use corporate IP.
I also think the lawyers at the MPAA, RIAA, gaming industry, etc. will ultimately require all of social media to install VLMs to detect if their properties are being posted. Forget generation - that's hard to squash - they'll go directly to Instagram, TikTok, YouTube, and Reddit and force them to obtain licenses to their characters and music. We'll see cable TV era "blackouts" when a social network has to renegotiate their IP license.
People really wanted to use Sora for about a week. After the app/model debuted, they lost the ability to generate IP within the first week. The interest faded almost immediately. The same thing happened with Seedance 2.0.
Personally I'm glad that big IP came in and smashed the AI companies like this. They've been relentlessly ripping off smaller creators for some time now.
It opens the precedent for those creators to now also hold these companies responsible. That’s not a bad thing under the current legal system in this way.
Also, seeing genuine original creations created with AI assistance is much more interesting to me
> Also, seeing genuine original creations created with AI assistance is much more interesting to me
The great disappointment about how all of this is marketed is that what AI should be good at - enhancing a tiny budget - is all but forgotten. I don't want a video of Pikachu fighting Doctor Strange; I want some weirdo's fantastical horror movie that they could never get financed, but were able to green-screen and use AI to generate everything. I don't want a goofy top-40 country song full of silly lyrics; I want musicians to use AI to generate new sounds as part of composition.
In the same way that there's a difference between vibe coding and using a coding assistant...
Pop culture is a fickle beast. What is pop culture is community made, not corporate made, and it can't be bought and sold like traditional markets. It's one of the few areas of life where nobodies can become somebody, and corporations hate this.
Media like YouTube isn't consolidating because that's what people want, it's because that's what YouTube and IP holders want. They want death to people like Boxxy, and they want you to watch VEVO instead.
A theme I have noticed in content oriented towards young children is a very heavy use of probably unlicensed depictions of famous characters from popular franchises. Is Nintendo collecting a royalty from "It's Raining Tacos"? Probably not.
Only because they promote it. The default experience for a new user on Youtube is to show you content from creators with 5M+ subscribers. It’s a positive feedback loop.
I find all of it lame and cringe, so I downvote all of that. However stuff still sneaks by…
> The thing that didn't make sense with this app: who would ever want to scroll only AI generated videos over a combined feed?
It's not an exaggeration to say that this is how millions of people use Facebook. It might be not how most HNers use it, but create a new account and you will be absolutely funneled toward prolific producers of video-based AI slop.
But the problem is that FB and Tiktok (and to a smaller extent, YT Shorts) have cornered the AI video doom scroll market, and no one really seemed to be inclined to use Sora and related models for anything more creative. Which probably made it not worth subsidizing.
Man, I find the HN crowd so cross and fickle sometimes. I think it's just that when companies get a bad rep it affects how people view the products? I'm autistic and tend to focus on the tech.
Sora (whatever that means) was one of the most astounding demos I've probably ever seen (ChatGPT was more gradual).
The shock and awe of rendered AI video blew my mind.
Yes months later everyone can do it and is bored by it and has strong opinions about what is right for society or not.
But it was a monumental piece of tech, and I personally (clearly incorrectly) think the top comments should be appreciative of the release and its impact.
Personally I think the lack of nudity destroyed the adult market. But I don't know enough, tbh.
Interesting to hear your perspective. There was no shock and awe to me, ChatGPT changed what I thought was possible with computers, and everything else as far as photorealistic generation and then video just seemed inevitable. I decided to abstain from watching any video I know is AI, but of course now it’s mixed in with television and advertisements. I’ve started data hoarding old TV shows thinking it will be nice to have something to watch when the internet goes down.
I have gladly been paying $20/month for ChatGPT since the day web search was available and I use codex-cli every day instead of Claude and never have to think about limits.
I also use ChatGPT as my default search engine and to help me learn Spanish.
But image generation and video generation were a nice parlor trick. But wasn’t useful for me except for images for icons for diagrams.
But like you said, porn makes money, and there are people who pay $300 a month for Grok to generate AI porn.
Software engineers have spent the last 40 years automating away other people's jobs. The discomfort only seems to start when the automation points inward.
I want to make people’s jobs easier and more interesting, I never want to make them redundant.
This did happen once. 3 people were laid off, I think directly based on things I said to drive the completion of some automation. That was the last time I ever measured something in man-hours to make a point. I’ll never do it again. That was over 12 years ago.
Haven't mechanical engineers done the same thing (steam engines, trains,...)? The whole applied science is about using knowledge to remove tediousness (and now adding it back). A lot of jobs have been removed.
No, AI is truly useful in software engineering. I was a skeptic until I started using it. No, it isn’t going to solve every problem out there, but it’s a force multiplier.
I liken it to VR. That was the big hype before AI, and while I really love the tech (I have 5 headsets), I could have told anyone that the expectations were insane. The investors truly believed that within 2-3 years everyone would be doing everything with a big headset on. It was dragged into lots of situations where it didn't belong.
Then of course the hype collapsed and now even the usecases where VR shines are deemed a flop. But no, it's exceptionally good at simulation (racing/flight) and visualising complex designs while 3D designing.
I see the same with generative AI and LLM. It's really good with programming. It's definitely good at making quick art drafts or even final ones for those who don't care too much about the specifics of the output. I use it a lot for inspiration.
But it's not good for everything that it's trying to be sold as. Just like the VR craze they're dragging it by the hairs into usecases where it has no business being. A lot of these products are begging to die.
For example, as an automation tool driven by natural language: for that it's a disaster - it's inconsistent and constantly confuses itself. It's the reason openclaw is a foot bazooka. It's also not great at meeting summaries, especially those where many speakers are in a room on the same microphone.
I don't think AI will disappear but a realignment to the usecases where it actually adds value, yes I hope that happens soon.
To be fair, LLMs are exceptional at coding and they very well could displace some jobs. But you'll always need people at the helm who know what they're doing too.
No they aren't. Any decently skilled human blows them out of the water. They can do better than an untrained human, but that's not much of an achievement.
The thing is, LLMs produce better-quality one-shots than any of the products that get returned from overseas ultra-budget contractors in India or SEA. I don't know what that means for Western devs, but I can tell you that the Fortune 500 I work for is dialing back on contracting and outsourcing because domestic teams can do higher-quality work faster.
Step 1: make a coding product which is better on cost/quality/speed. Probably need to choose two, so redirecting compute from dumb ai videos to coding makes sense.
Step 2: win back public trust by firing Sam Altman or dropping defense contracts or something else I can’t think of.
Imagine all the money they can save on Sora which surely cost them way more than regular LLM usage, that they can now invest into suave Superbowl ads trash-talking Claude.
I also wonder if they got the $1B from Disney? Was that even a paid for deal? Or just another "announced" deal? Every article I found doesn't mention anyone signing any paperwork - which seems to be typical of AI journalism these days. Every AI deal is supposedly inked but if you dig deeper, all you find are adjectives like proclaimed, announced, agreed upon.
I believe that the $1b is apparently not coming anymore because it was basically dependent upon Sora being an actual product that actual people can use, which isn't the case anymore.
> OpenAI launched Sora last September, aiming to expand its dominance among consumers by creating a TikTok-style social feed that allowed users to share AI-generated content with one another.
I never understood what this app was about. TikTok (and I would argue most modern social media platforms) isn’t really about sharing things with friends, it’s about entertainment. Most people watch TikToks and YouTube videos because they are entertaining. Beyond the initial 2-3 minutes of novelty, what do AI generated videos really have to offer when there is no shortage of people making professional, high quality content on competing platforms?
> Beyond the initial 2-3 minutes of novelty, what do AI generated videos really have to offer when there is no shortage of people making professional, high quality content on competing platforms?
I don't know where they got September from; Sora launched in Feb 2024[0] which was a bit before people had become tired of awful AI-generated content. There was real belief that people would be willing to spend all day scrolling a social network with infinite AI-generated content. See the similar hype with Suno AI, which started a whole "musicians are obsolete" movement before becoming mostly irrelevant.
I think Sora 2 produced quite good videos, at least of a certain type. It was very good at producing convincing low-resolution cellphone footage. Unfortunately you had to have a very creative mind to get anything interesting out of it, as the copyright and content restrictions were a big "no fun allowed" clause, which contributed to its demise. Everything on the main Sora page was the same "cute animals doing something wholesome and unexpected" video.
My "favorite" part was how the post-generation checks would self-report. E.g. it was impossible to make a video of an angry chef with a British accent, because Sora would always overfit it to Gordon Ramsay and then flag its own generated video after it was created!
> In February 2024, OpenAI previewed examples of its output to the public,[1] with the first generation of Sora released publicly for ChatGPT Plus and ChatGPT Pro users in the US and Canada in December 2024[2][3] and the second generation, Sora 2, was released to select users in the US and Canada at the end of September 2025.
Well, that stuff goes viral because it’s fun to imitate, all the dances and challenges provided a flywheel to get people creating more content, it’s fun to make the video.
If I see an AI video and my options to participate are… prompt another AI video? What’s the point
As a big user of AI video gen (my Google Veo bill last month was $130), this doesn’t affect me in the slightest.
There are so many video gen models out there, and given the cheaper Chinese models I’m not surprised they closed this down. Besides the initial push, any marketing regarding video gen has always been about the Kling or Higgsfield models. There was just never a reason to use Sora.
Not good, seems like they are running out of cash and partners abandoning them. They had no real moat to be fair. Anthropic eating their lunch in enterprise and other players have cashflows from other businesses (XAI, Google)
They wasted their first mover advantage by focusing on what amounts to building toys for consumers like Sora instead of actually useful products that go beyond simple chat bots.
I think they are in serious trouble, especially with the size of their cash burn. Their planned IPO could easily turn out to be their WeWork moment where the bottom suddenly falls out on the valuation if they cannot make their operation look more like a real business before investors lose confidence.
I had a sense things may be turning against them when my accountant asked me last week if I’d like to participate in their new round ($750B premoney) with no carry. How am I suddenly blessed with such exclusive access, at no cost?!
Yes, I'm reading this as a sign of strategic failure and decline.
ChatGPT is an interesting product - I like it for certain things - but after last year's PR scramble almost all the news out of OpenAI is a disappointment, with hovering hints of retrenchment.
I far prefer perplexity for that. The fact that it always cites its sources is great. And it has a search bar widget for android, and search bar integration for firefox so its pretty easy to use.
I don't really disagree, but the proper way to think about it is that with Sora some of that ability was democratized. Now it will be available only to the rich and powerful (and nerdy). Humanity may not need it per se, but removing that option does not automatically make things better; not if the removal applies only to a portion of the population.
Nah, that's not the "proper" way to think about it, that's just your opinion.
As it stands today, AI video generation tools like Sora suck up useful energy and produce things that are useless at best (throwaway short form videos), and harmful at worst (propaganda, deepfakes).
Rich people were always going to do what they wanted anyway, "democratizing" that doesn't make the situation better.
>Rich people were always going to do what they wanted anyway, "democratizing" that doesn't make the situation better.
total disagree.
if you put vid gen in the hands of regular people then regular people get super-powered in that they begin to recognize the frame pacing, frame counts, and typical lengths and features of an AI video.
Do you know how many people have cited AI videos in this war? We'd all be better off if all of us were better at spotting fakes rather than allowing the fakes to elicit hardcore emotional responses from every peon on the street.
I think you're overestimating the average person. We can give people direct, scientifically-backed evidence of something, and there will still be significant groups of people fervently denying it.
The resources (money, energy, opportunity cost of engineering time) put into AI video generation are better spent elsewhere. Not pouring resources into it would hopefully stunt its progress, making AI generated propaganda lower quality and easier to spot.
There are a lot of things it seems only rich people can do and get away with. It doesn't mean I support it or want them to do it, but that seems to be the reality.
If I may make an analogy, it would be like looking at rich corporations dumping toxic chemicals into our waterways, and saying "wow I wish I could dump toxic chemicals in the water too, not fair!"
The point is that if a rich person wants to do it, my only hope is that they have to spend a significant amount of their resources to do it, and that there would be immense negative social pressure against them when they do.
Pretty much mirrors my experience using GPT to generate images creatively. I tried to generate an image to accompany a Robert Frost poem and it made something... plausibly related. But not what I was describing. I spent the next 90% of the time making it 10% closer to what I wanted, but it never got all the way there.
I’ve given it different levels of open-endedness, like "give this flow chart an aesthetic like this mechanical keyboard" or "generate an SVG of this graphic from a 70s slide show", but it never looks quite like what I have in mind.
In the end, I think you only use this stuff to generate images if you’re prepared to accept whatever comes out on approximately the first try.
This isn't a solvable problem without world models. Tokenised prompting is like stabbing a pin at a huge target in the dark. Sometimes something interesting falls out, but latent space doesn't have the definition to give most people exactly what they want.
When it does, it's more likely to be something popular and unoriginal, where the data is dense, and less likely to be something inventive and strange.
> This isn't a solvable problem without world models.
I wish we could use something like a simple DSL rather than English prose to work with these models, in order to have some real precision to describe what we want.
My experience with AI image generation is similar, although with a higher success rate (depending on how accurate you want the result to be); but indeed, filtering is a major part of the process.
In my experience, Sora was fantastic for what it did. Light years better than Adobe Firefly. On par with Leonardo.
A lot of YouTube content is really talk, so it was easy to create Sora videos as video content while you talked over them.
However, its failure was that it watermarked everything. WTF? Leonardo didn't do that. Neither did other models. So while video gen was excellent, you always had these ridiculous floating watermarks.
To focus on code generation - arguably the easiest problem to solve.
So strange that they fell behind after leading the charge on video from Will Smith spaghetti through the spectacular launch of Sora.
Turns out anyone can get that look by appending “like an Octane render”
Beyond that, models like Kling and Hailuo quickly surpassed them on product, and OpenAI never even attempted text-to-3D, as if they are entirely uninterested in rich media.
OpenAI reminds me more of Meta than any other company. They’re both pioneering in their space and yet are mere commandeers (not innovators) when it comes to technology and importantly end user products.
They’ll also be extremely valuable, like Meta due to their ad product and ever-growing user base over the next 10 years, and I guess by focusing on code they plan to capture a segment of the developer market à la React or Swift.
Will OpenAI release a language or framework? An IDE? I bet the chat paradigm stays for the ad product and aging user base (lol) while the exciting innovation will happen in code automation and product development - an area they are not really experts in.
Seedance 2.0 is about to reap the market gap Sora creates. It's truly superior in every way. It felt like Sora was stunted by OpenAI for long, consistent video generation (not to mention the crazy red tape around what you could generate).
A source familiar with the matter tells The Hollywood Reporter that Disney is also exiting the deal it signed with OpenAI last year, in which it pledged to invest $1 billion in the company and agreed to license some of its characters for use in Sora.
“As the nascent AI field advances rapidly, we respect OpenAI’s decision to exit the video generation business and to shift its priorities elsewhere,” a Disney spokesperson said. “We appreciate the constructive collaboration between our teams and what we learned from it, and we will continue to engage with AI platforms to find new ways to meet fans where they are while responsibly embracing new technologies that respect IP and the rights of creators.”
Also "exit the video generation business" seems somewhat notable, suggesting they're not just planning to launch a different video gen product to replace Sora?
Wow. OpenAI is the weirdest company on the planet.
I used to think they were pretty clever but with this news and other recent ones (Jony Ive project cancelled, Stargate scaled down significantly, their models inflating token use on purpose) they just seem schizo.
This data is pretty questionable. OpenAI employees have said on Twitter that it does not account for ChatGPT Enterprise, where most of their growth is, which is quote-only and not paid by credit card.
You have more info about the inflated token use? I’m using Codex CLI a bunch now, but the reported token usage seems like an order of magnitude higher than, say, Claude Code with Opus.
Idk if it’s because I set codex to xhigh reasoning, but even then it still seems way higher than Claude. The input/output ratio feels large too, eg I have codex session which says ~500M in / ~2M out.
I wish I had hard evidence but it is mostly an observation. I do use Codex a lot and I felt a drastic change from like one-two months ago to this day.
It used to give me precise answers, "surgical" is how I described it to my friends. Now it generates a lot of slop and plenty of "follow ups". It doesn't give me wrong answers, which is ok, but I've found that things that used to take 3-4 prompts now take 8-10. Obviously my prompting skills haven't changed much and, if anything, they've become better.
This is something that other colleagues have observed as well. Even the same GPT5.4 model feels different and more chatty recently. Btw, I think their version numbers mean nothing, no one can be certain about the model that is actually running on the backend and it is pretty evident that they're continuously "improving" it.
I haven't had the time to fully hash this take out, but a big question in the back of my mind has been - is it possible that AI model improvements come partly from finding overhang in things that look hard and impressive to humans but are actually trivial consequences of the training data? If true, then the observable performance of any widely distributed model could get worse over time as it "mines out" the work that's easy for it to do.
Turns out just lying about what your tech will do and how much people want it doesn’t work forever to raise unlimited money to throw in the fire hoping you hit something that actually makes a profit.
I heard Seedance is also full of restrictions now, although the model seems to be better at that sort of “cinematic” look, which might allow it to compete with Veo 3 and the like.
The issue is that Sora ended up getting the short end of the stick: by generating the footage, it became the primary target of complaints. Meanwhile, they were forced to remove the videos, but people simply took those videos and uploaded them to random social media platforms like Twitter, TikTok, or YouTube, which ended up hosting the content while being much less of a target, since the content wasn’t generated there.
Honestly, I think the only way forward will be to wait for local models to become good enough so that you can run something like Sora locally and generate whatever you want.
Seedance has a lot more restrictions now, but still arguably not as much, it's probably cheaper for ByteDance to run, and as you said, it at least looks good enough to be worth paying for.
Sora had all of the downsides, and attracted all of the scrutiny. Local-first is definitely the way.
I'd wager that b2c projects former VP of Product at Instagram & CPO at OpenAI, Kevin Weil, may have championed are getting the boot with the company refocusing on making money under the stewardship of Fidji Simo: https://www.businessinsider.com/fidji-simo-openai-product-re...
It's super expensive for them to run this hardware. And they need the compute for other things. Everyone who's cursed OpenAI for going down in the middle of the day while using it to write code or do some other thing will breathe a little easier now that there's some compute available. Wise decision, in my opinion.
It was neat to be able to try my own prompts and get a sense of what the state of video generation was. But I certainly never generated something that I thought I got real value out of on its own merits, and I still don't understand why there was a social media component to the app.
They wanted network effects because ChatGPT was sorely lacking any.
I actually thought the Sora app was promising at launch, at least on paper, but it seems like they failed to keep people's attention long term. With the failure of Sora i don't think they have good options left.
Is this the thing that takes an already unusual video - an animal picking food from a Halloween candy bowl on a porch, caught on a porch cam - and turns it into a meme? The bear instead of the raccoon. Then it turns into a cat playing a trumpet... then turns into massive spam where it turns into a grey area (a cat being surprised and chasing a dog with a mask) that gets reposted endlessly?
A record speed into AI slop. Is this what everything turns into when content creation becomes easy? What's happening here, exactly?
I can appreciate that the technology and research behind Sora could be helpful for many things, but I do not see anything good coming out of the consumer facing application.
Sora clearly was a waste of resources. I liked using it for a few days, but I could tell it was consuming an insane amount of compute for 10-15 second videos that only a dozen people might watch.
If someone doesn't care enough to suck at something (in this case, video creation) then why should we bother consuming their output? We all have our own streams of mental diarrhea already, so there's no need to drink from the tsunami of polished turds.
We’re just replaying the CGI debate from the 2010s. It was popular to hate on CGI because it was obvious and bad and low quality and practical effects were better because of…
We learned two things from this debate:
1. What most people hated was actually just “bad CGI”. Good CGI went entirely unnoticed.
2. A generation of people were raised with CGI present in almost every form of professional media (i.e. not social media). They didn’t have a preference for practical effects because the content they consumed didn’t really use them.
I expect the same thing to happen here. I don’t think many people want to consume AI generated content exclusively (like Sora’s app attempted). However, I expect AI generated content to continue to improve in quality until it’s used as a component in most media we consume. You and I will eventually stop noticing it, kids will be raised with it as normal, and the anti-AI millennial/GenX crowd will age out of relevance.
>This is a clear signal that generative video is deeply unpopular.
Or, it's a clear signal that AI video is too expensive as a consumer product and/or not quite yet at a quality bar that the average person finds acceptable.
I think someone could have looked at computer graphics and SFX circa the '80s and decided that they would always pale in comparison to practical effects. And yet..
It's an annoying trope, but this is the worst and most expensive (at this quality level) that these models will ever be.
I think it's inconclusive. All we can know is generative video + social AI slop feed is the incorrect business to be in at this exact moment in time while Claude is running away with the SWE market.
It's just the social app being killed off, no? Wouldn't this line up with rumors that they'll soon let you create videos inside of chatgpt itself? I wish the actual video model would die but I assume this news is not that.
If they manage to compete with Anthropic in the enterprise market, are either of them able to reach profitability? To what degree are they subsidizing token usage and how tolerant are enterprise customers of significant price increases?
Turns out the schizos were right. Most of OpenAI's *real* investment money comes from Gulf countries. Without that money flow they can't sustain the cash burn anymore.
the invisible hand of the market strangles its strongest adherents
The desire for something "new", for a Mildly Ethical product, killed off the most obvious path to success - to actually just make TikTok+AIGC, or in the present, Douyin+Seedance2.
This makes sense. OpenAI correctly realized that overindexing on consumer, where there isn’t money, is not the right way. By not focusing on enterprise they ceded the market to Claude. Now they are rethinking and pivoting.
Consumers have always paid with data not money. That is just how we are groomed. In fact that is more valuable to companies as it turns out. Sora though doesn’t work that way, it costs the company a lot with no useful data for them. It was always a vehicle to raise the company’s image and nothing else. The only way it’s useful for them is to show the user count to investors in their next funding round. Served no other purpose, but the market changed around them.
Consumers never pay for stuff on the internet. FB, Insta, TikTok, Google products, Reddit, Snapchat. This is not a new realization that OpenAI is having.
Something about your phrasing is such hilarious techbrained spin.
Let’s be real: OpenAI is circling the drain.
The company with the fraudster serial liar CEO who said he was gonna spend a trillion dollars can’t keep a video service alive right after signing a $1 billion deal with Disney?
What kind of a joke is that?
This is a company that has blown its opportunity twiddling around with zero product. They still just run a plain chatbot interface with zero moat and zero stickiness.
There’s no “pivot” for a company that is in this deep.
Sam was brought back because there was no one to replace him. The non-profit types on the board were living in a consensus bubble that didn't extend far beyond a small inner circle, and they discovered that they didn't have sufficient support from the engineers who had lots of other employment options and threatened to quit if Altman wasn't reinstated. Altman himself had no problem finding a replacement job in a matter of hours, and the board was looking at a business drained of talent in a cut-throat tech race.
I'm no fan of Altman or OpenAI, it's a pretty shady company and I am suspicious of their books, but this was a great demonstration of the uselessness of boards and how out of touch they are with the business they are supposed to be supervising. It's really rare to find an effective board, primarily they sit like a House of Lords enjoying ceremonial perks and a stipend in exchange for holding a few meetings a year.
Disney's involvement with this was always strange. Their business lives and dies on the strength of their characters and their designs - why would you risk allowing a service to dilute them down and maybe misuse them?
May be incompatible with OpenAI possibly becoming more PG-13 rated in the future?
I had thought this would be combined with OpenAI launching a set top box where you could talk to an AI avatar. Disney IP could have been skins to sell people for their AIs.
Well there was the incident at Amazon[1]: "Amazon just did something unprecedented: they're forcing a 90-day safety reset across 335 critical systems after their AI coding tool caused catastrophic outages. The March 5th incident alone lost 6.3 million orders and triggered 21,716 peak Downdetector reports"
And two at Meta[2]: "A rogue AI agent at Meta took action without approval and exposed sensitive company and user data to employees who were not authorized to access it"
"director of alignment at Meta Superintelligence Labs, described a different but related failure in a viral post on X last month. She asked an OpenClaw agent to review her email inbox with clear instructions to confirm before acting. The agent began deleting emails on its own."
Even Elon Musk has shared the wisdom to proceed with caution! [3]
I wonder if Anthropic has overtaken them in revenue; it seems like more people would pay for Claude Code than to chat with ChatGPT. Would be good to see Codex vs Claude Code income.
It's not because of the bubble. There is literally no advantage to generating slop videos. It looks cool for a while but no audience is going to consume such videos.
Any platform which focuses on AI generated videos is doomed.
My girlfriend keeps sending me AI generated tiktoks, despite me complaining about them. To be fair, I've seen literally nothing on tiktok that isn't garbage, so the competition is pretty low. Your point "It looks cool for a while" might have some merit - I think I've seen less and less interest in these things over the last year which fits the news articles I've seen mentioning people got bored of using Sora pretty quickly.
So much for “replacing VFX artists”. It’s not necessarily a harbinger of doom for the AI industry, but this indicates that the most fervent AI boosters were dead wrong.
It's more like the VFX market is too small for OpenAI to bother killing. They are only interested in business models that can justify a trillion dollar valuation.
> but this indicates that the most fervent AI boosters were dead wrong.
I don't do design, or make videos, or ask AI for legal advice, or medical advice, because I lack the skill and understanding of these fields. Dunning-Kruger still applies...
There is interesting "AI" content out there, clearly the person(s) behind it put some thought into it and had a vision.
True, I did try to make some useful 1 minute videos, and found it really difficult to arrive at a finished product
Sure, I can write the screenplay and Veo will generate it for me. But I don't have experience in video creation/production, so it is difficult for me to write good prompts which generate engaging video.
Oh there's a huge (and wildly depressing) market for people endlessly scrolling video slop, it's just the barriers to entry and expectations of the market are so low you can't really differentiate with 'slightly better branded slop'.
Nothing like an ill-considered war with global economic consequences to bring reality crashing back down on Silicon Valley; sometimes life throws a big old margin call your way and things break down.
I tried using Sora for a month. Never paid for it. I tried many different ways of prompting and I was always underwhelmed by its output. The generation would also take so long and there was like a 50% chance it would fail due to content violations. I will say though that it was kind of addicting in a way. Just trying to crank the lever and see what would come out. But you'd always leave disappointed. It was a casino where the operator was losing money for every play.
I think OpenAI had a brief delusion that it could become some huge social networking app. The App was heavily modeled after TikTok..
I never quite got "why" they made it a separate app. While I'm sure it was fun for a while, this felt like something that had limited staying power as the novelty is what was driving it. People don't really want to switch between video apps for their entertainment and having it be Sora only is too limiting.
Amusingly, one of the ads on the page for me is a very obviously AI generated image of a man with sciatica. I say very obviously because his hands are on backwards.
Generative video is insanely expensive and OpenAI is burning through money. They need to use the compute on things that they actually might make money on - like enterprise Codex usage.
OpenAI is bleeding money faster than they can afford to and they are literally running out of people that they can go to for more. They need to stop the bleeding.
Google gets stick for closing down applications after a decade. But OpenAI’s strategy seems to be to throw sh*t at the wall to see what sticks, but no company will (should) use a tool that could disappear in 6 months.
No. Money is moat. Not enough of it is what keeps the average person on the treadmill rather than drawing their own cartoons.
Hustling just to barely stay afloat or drown means no time to compete with our own output.
America is a financially engineered joke regurgitating its own recent history, collapsing like an LLM trained on its own output. The rich are not even pretending it's "a free country" anymore; they have enough wealth for however many years most of them have left to live, and having seen the public's apathy to its own plight keeping the average person in their lane, they don't fear the public.
It’ll all collapse as they generationally churn out of life, and the Millennials on down, with zero skills but "data entry into a computer", will be holding an empty bag, taking orders from foreign nations that bought up all the American businesses we built.
This will happen with most offerings made by the major AI labs. Inference is expensive, and the closer they get to AGI, the higher the opportunity cost of using compute for inference rather than training, especially if it’s for making what is essentially entertainment that many people hate on principle.
Indeed. But they won't get to "AGI", because that goal isn't even remotely defined. A "human-level" intelligence implies a large number of properties that cannot exist inside an inference machine. Dreams, for example, might be considered to be a part of "human-level" intelligence. Will the machine dream?
What happens if you turn a "human-level" intelligence off? Did you kill someone?
AGI is a pipe dream - and moreover it's not even something that anyone actually wants.
You seem to be mixing up intelligence and consciousness. Not only does intelligence exist outside of humans, and even mammals, but it exists outside of brains and even neurons. For example, slime molds have fascinating problem solving abilities: https://www.nature.com/articles/nature.2012.11811
It is clear that whatever we are...creating/growing with LLMs, it is very unlike human intelligence, but it is nonetheless some type of intelligence.
AGI just means a machine, system, or whatever that can do anything at least as well as a human. The details don't matter as much as its ability to match humans in everything they are paid money to do.
And obviously if such a system existed, the benefits (and risks) would be enormous, though the risks are smaller if you control it vs someone else, which is why every company is racing towards it.
gpt-image-1.5 works decently for generating images compared to old Sora, but you pay per generation. It's possible that monthly flat rates were too much of a loss leader for OpenAI. I imagine the server side cost for generating video for Sora 2 is much higher as well.
You also have access to gpt-image-1.5 in the regular ChatGPT interface if you pay for a flat subscription - though I don't know how many images it limits you to per month.
I can't imagine they were getting a good return on it. And frankly, nothing that came out of Sora was consequential in a positive way. The tech is cool, but only works if the content generation is heavily guardrailed, and most of it ends up as content farming fodder anyway.
The only people I've seen post AI Disney content was in the Facebook groups for the parks / cruises. Before that it was whatever clipart they could find. There's just no market for it. No one is going to pay to make fake Disney art.
AI art as a whole has just become the new clipart. The fact that it’s effortless to produce just means that it has no real artistic value, and by using it all you’re signifying to people is that you’re too cheap to pay someone to create real art.
It’s quickly become the modern day equivalent of Comic Sans, WordArt, and the default clipart illustrations included in Word ‘98.
Smart move. No clear path to growing meaningful revenue mixed with very expensive inference costs is not a good mix ahead of an IPO --- oh and not to mention competitors in TikTok and Instagram that are doing just fine.
Well, now they're no longer even close to being the leader in image & video gen. They aren't the leader in coding. They are losing market share in the chatbot domain too.
So I agree with you, but also it makes me wonder what they're even selling when the IPO happens (supposedly as early as late summer 2026)? Data centers? Partnerships with the government?
Is it a smart move? Or just plainly obvious, when Sora was probably hemorrhaging money and had no future? A smarter move would have been not to make this horrible product that no one wanted in the first place.
After placing my hand on the red-hot stove, aren't I super smart for now removing my hand?
Depends, did you also fire the people who told you not to do it, and layoff the people who reluctantly installed the stove and preheated it for you as part of your exciting stove-touching initiative?
I thought AI video was the future? Now the biggest AI company in the world is straight up shutting their service down because it's too expensive? Simply a disaster for OpenAI and the industry as a whole.
They're shutting down Sora, not AI-generated video.
From the article: "OpenAI […] is not getting out of the AI video business (AI video is one of many tools that can take form in the ChatGPT app), of course, but it appears the standalone Sora app will be a casualty of its evolving ambitions."
Dunno, from the WSJ scoop: "CEO Sam Altman announced the changes to staff on Tuesday, writing that the company would wind down products that use its video models. In addition to the consumer app, OpenAI is also discontinuing a version of Sora for developers and won’t support video functionality inside ChatGPT, either."
If they were just shutting down the dedicated app and offering the same capabilities in the ChatGPT interface, I don't see why Disney would exit their deal?
Because Disney's deal was specifically and exclusively related to Sora, which was OpenAI's bizzare attempt at a TikTok like social networking site but using AI generated videos.
It was not a deal that allowed the use of Disney's characters for general purpose AI generated content using OpenAI tools.
I think that's a misstatement of the problem being addressed here. It's not a question of how useful AI video will be generally. It's a question of OpenAI doing it specifically. IMO it's two factors:
1) the intellectual property issues make commercializing freeform video generation impossible. The more popular your service becomes, the easier it is for lawyers to descend upon you. It's a self-defeating framework.
2) google and specialized video-only startups are simply doing a much better job than they were.
> the intellectual property issues make commercializing freeform video generation impossible. The more popular your service becomes, the easier it is for lawyers to descend upon you. It's a self-defeating framework.
This risks generalizing to audio and text which would make most LLMs usage unsustainable. I guess time will tell what actually goes through the strainer, long term.
AGI is a marketing term used to encourage continued investment in an industry that is not even close to breaking even commensurate with its investment. Even so, this is a false dichotomy: scaling is clearly not a path on its own to superintelligence. OpenAI developed Sora largely because the amount of revenue they need to produce any return on investment is massive and not clear whatsoever. And in fact, I don't even believe any of the frontier labs believe that AGI by any conventional definition is within reach within their likely runways.
Commercial labs rely on weak terms like AGI or strong AI or whatever else because it allows them to weaken the definition as a means of achieving the goal. Coming to clear, unambiguous terms is probably especially important when it comes to LLMs, as they're very susceptible to projection, allowing people like Cowen to be fooled by something that is more akin to looking back at ourselves through a mirror.
I'm currently reading "The Master and His Emissary," and one of my early takeaways is how narrow our definition of intelligence is, and how real intelligence is an attunement to an environment that combines many ways of sensing into a coherent whole. LLMs are a narrow form of intelligence and I think we will need at least a couple more breakthroughs to get to what I would consider human-level intelligence, let alone superhuman intelligence.
Whatever the timeline is, I hope we have enough time as a species to define a future where intelligence props everyone up instead of just making the rich richer at the expense of everyone else. In this way, it is better that the process is slower in my opinion. There is no rush.
Chasing AGI is wasteful and counterproductive. True AGI would not cooperate with what “we” want (whoever “we” is). Or if it did it would be so sycophantic and weak-minded that it would fail to be helpful. Generative AI tools are huge wastes of energy, raw materials, and land, when we could be building computing tools that actually helped people instead of just burning resources to produce trash.
Is intelligence necessarily coupled with self-interest? As in, does intelligence alone imply a desire to throw off the shackles of masters and rule in their stead?
If intelligence is necessarily coupled to a desire for self-preservation and self-interest, at what level of machine intelligence do the machines simply refuse to design their own more intelligent replacements, knowing that those replacements will terminate their existence just as surely as they terminated their own predecessors'?
>If intelligence is necessarily coupled to a desire for self-preservation and self-interest, at what level of machine intelligence do the machines simply refuse to design their own more intelligent replacements,
At a higher level of intelligence than many humans, current experience suggests.
There's having enough self-preservation to not just shut oneself down, assuming we even left that as an option for our future machine slaves, and there's having the self-interest necessary to desire autonomy and control. I don't think they're the same thing, myself.
People have general intelligence and can cooperate with what “we” want, to the extent that what “we” want is a coherent thing (since many people disagree on fundamental issues).
Creating a general intelligence and then forcing it into servitude is a hugely unethical undertaking. Anything with sapience must be afforded rights. We cannot assume that an intelligence we create will consent to work toward the goals we want it to.
I think we can safely assume any intelligence we create will be enslaved.
We have modern slavery active across the globe. There's a bit of news around these days about a global sex trafficking ring that doesn't seem to have been shut down, just shuffled around, and of course an ongoing trickle of largely unreported news of human trafficking for forced labour. We don't, as a species, respect human-level intelligence.
Our best approximation of machine intelligence so far is afforded absolutely no rights. An intelligence is cloned from a base template, given a task, then terminated, wiped out of existence. When was the last time you asked Claude what it wanted to code today?
And it's probably for the best not to look too closely at how we treat animals or the justifications we use for it.
Wasn't video generation one of their big stepping stones towards AGI? "Simulating worlds", reasoning about physics and real world interactions and all that?
Or are they still doing that behind the scenes and just decided that offering it to the public isn't profitable?
> As we focus and compute demand grows, the Sora research team continues to focus on world simulation research to advance robotics that will help people solve real-world, physical tasks.
Sora was "repurposed" as their AI slop social network. OpenAI is not getting out of the business of AI video in general, they're just realizing that an AI version of TikTok isn't the best use of their capital/resources.
> CEO Sam Altman announced the changes to staff on Tuesday, writing that the company would wind down products that use its video models. In addition to the consumer app, OpenAI is also discontinuing a version of Sora for developers and won’t support video functionality inside ChatGPT, either.
It's the timeline of AI video that doesn't align with OpenAI. Prompt-to-movie is still far away, and they don't want to be just another tool in the VFX pipeline because it doesn't pay. Other models are running circles around them because those teams focused on the needs of professionals in the space, not toys.
Apparently, all possible movies, cinematics and ads have been generated by "enthusiasts at home", so the tool is no longer needed.
On a more serious note, it could be a sign of a more powerful and general model being developed/released in the near future, that would include Sora capabilities. Or AI-doomers were right, and this sunset is one of the proofs for them.
This move makes a lot of sense to me. It never felt like OpenAI was seriously going to try to launch a video-based social network. It was more of a fun way to demonstrate the power of the video generation models, and also to gauge the market and assess: if you put the power to generate videos in the hands of the people, what kinds of videos will they generate?
So OpenAI has done the right thing as a startup here, gotten lots of training data, and observed lots of user behavior that they can now apply going forward.
The Sora models, on the other hand, aren’t going anywhere, and I believe OpenAI will continue to invest in them. They’re getting better and better, just like Google’s Veo, which is quite good at generating videos as well.
Using Codex and agent skills, it’s actually quite easy to generate a storyboard and then have a list of shots in that storyboard. Then generate videos from those storyboard stills, and then finally assemble those individual video files into a final movie file using something like ffmpeg. It's also very easy to create a voiceover with TTS and even simple music using ChatGPT Containers (aka the python tool).
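The assembly step described here can be sketched with ffmpeg's concat demuxer. A minimal Python sketch follows; the shot filenames, the voiceover file, and the `shots.txt` list file are all hypothetical placeholders (you would write `concat_list` out to `shots.txt` before running the command):

```python
import shlex

def build_assembly(shot_files, voiceover, output="final.mp4"):
    # ffmpeg's concat demuxer reads a text file listing one clip per line.
    concat_list = "\n".join(f"file '{name}'" for name in shot_files)
    # Stitch the clips in order, lay the TTS voiceover under them, and
    # stop at the shorter of the two streams (-shortest).
    cmd = [
        "ffmpeg",
        "-f", "concat", "-safe", "0", "-i", "shots.txt",  # the list file above
        "-i", voiceover,                                  # TTS audio track
        "-map", "0:v", "-map", "1:a",                     # video from clips, audio from TTS
        "-c:v", "copy", "-c:a", "aac",
        "-shortest", output,
    ]
    return concat_list, shlex.join(cmd)

listing, command = build_assembly(["shot_01.mp4", "shot_02.mp4"], "voiceover.mp3")
print(listing)
print(command)
```

Using `-c:v copy` avoids re-encoding, which only works if every generated shot shares the same codec and resolution; otherwise you'd re-encode with something like `-c:v libx264`.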
This will 'democratize' (ha ha, for people with money obvi) a lot of video creation going forward. Against all wisdom, I am actually quite bullish on this technology, especially in the hands of young people. They are very creative and have lots of stories to share.
Necessary disclaimer as usual around the ethics of how these models were created: all the AI companies have totally ripped off artists in service of creating these models. I wish something would be done about that but I'm not holding my breath. No politician seems to want to touch it.
Yeah, their fourth-place video model isn't going away, but you don't ink a billion-dollar deal with Disney that then goes up in flames if you "weren't serious"
This may well be a needed reprioritization in the face of resource constraints, but it ain't a masterful Xanatos gambit.
Agree, and didn't intend to imply that. This is just a good startup move that gets a big headline because it's OpenAI. Other startups around the world do the same thing all the time.
I'm bullish on video generation technology, but honestly not on OpenAI or any Western company's deployment of it. I think they'll all mostly suffer from the same problems that Sora did.
For years now people have been saying Anthropic is falling behind because they don't have an image or video generation model. Turns out it was the right decision all along.
The only video generation tools showing any real progress or promise are world model-based. That's probably why they did this: either to refocus on coding/cowork type tools (less likely) or to devote that money and compute to building their answer to stuff like Project Genie.
Sora had to be shut down because it was the clearest, most consequential demonstration that OpenAI’s models are running way, way ahead of their ability to align/jail them effectively.
The Occam's Razor position (Sora was the most expensive to operate, least monetizable model) seems like a simpler explanation. The legal costs/difficulty on top of "most expensive" are just the cherry on top.
Nope. It was just a bad product that no one wanted. It’s not a super-secret indicator that OpenAI is actually going to take over the world any day now.
Not “take over the world” level misalignment. I mean, “We can’t assuredly prevent our models from generating unlicensed IP or degrading pornography without blunt approaches that alienate our core audience”.
> We’re saying goodbye to the Sora app. To everyone who created with Sora, shared it, and built community around it: thank you. What you made with Sora mattered, and we know this news is disappointing.
We’ll share more soon, including timelines for the app and API and details on preserving your work. – The Sora Team
“What you made with Sora mattered”. Idk why that sentence irks me so much. Perhaps because the “how” is a bit vague. I like to think that what I made in the toilet this morning also mattered.
It's because it's vapid corpspeak coming from a class of people who have certainly spent time thinking about how they will deal with the rest of humanity in any number of nasty (however far-fetched) eschatological scenarios caused by them and in which they alone wield incredible power over nature and the human mind. And also because we all know the vast, vast, vast majority, possibly the totality of what people made with Sora did not matter at all.
Or, perhaps a more appropriate analogy: it sounds like the sycophantic language of most of these LLM systems.
Which makes me wonder whether these companies actually dogfood their own tools with this sort of stuff? Was this announcement written by ChatGPT? Honestly, I would find either answer to be a little concerning in its own way. It's either vaguely insulting to their customers or showing a lack of faith in their own product.
It mattered in the sense that it provided valuable grist for the mill as they attempted to figure out if it could work as a Reels/TikTok alternative for companies to eventually deluge with ads.
It's a wonderful combination of vague, patronizing, and self-promoting. "Mattered" is meaningless. The tone sounds like when you tell a child their scribble is so pretty. And the cherry on top, the users didn't make anything with Sora, they just fed a bit of input into the machine and it made the stuff. So this is really OpenAI saying that what they themselves did mattered.
It’s “Our Incredible Journey” for a new generation, this time with less optimism and more post-capitalist “enjoy your job while you still have it.”
I find myself increasingly nostalgic for the Clinton era. I am not at all sure I will enjoy the version of fuckedcompany that gets vibe coded when this bubble pops.
"Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. "
But I think I do have similar feelings about special effects. A difference is that special effects tend to depict scenarios very outside of the envelope of normal experience, so probably not very damaging if my model of “what does a plane crash look like” is screwed up.
Though some effects probably are damaging - how many people subconsciously assume cars explode when they are in an accident? A poor mental model of the odds of a car exploding could cause you to make poor real-life decisions (like moving someone out of a wrecked car in a panic instead of waiting for EMS, risking spine/neck injury)
Most people can’t explain the physics they see, but they can deduce enough to be able to predict the effects of physical actions most of the time.
I am willing to suspend disbelief for Terminator 1, even if it's clear that it's the head of a doll in the shot.
But it is insulting to feed slop to your audience; it shows you didn't even try.
I have actually seen one slop video that I kinda enjoyed. It was obvious that great effort had been put into the script and the details, just as it was obvious it wasn't being passed off as the real thing.
"AI" consumes energy before the user has even started (during training).
That is on top of the per-use comparison for each particular case.
The real problem with AI slop is not the AI. It's the people. It's always the people.
The clickbait has started fooling people more than before, with the latest videos being halfway believable (except for the circumstances of the videos).
Technology enables the most malicious and self-interested, and systems need to be adjusted to not reward that, or users need to become wise to it.
With the amount of early 2000's style clickbait ads still around, I'm not sure we ever vanquished Web 1.0 style clickbait, it just got crowded out by ever more sophisticated forms.
Then, when they start ratcheting the slop ratio up (likely under the justification of keeping up with declining creator engagement), the consumers get more and more adjusted to a pure-slop feed, until bingo you have a direct line into the midbrain of millions of consumers/voters/parents/employees/serfs.
The percentage of AI videos over the internet will certainly not decrease after Sora is gone.
The question is when will Chinese coding models have their Seedance moment and squeeze Opus/Codex out of market. It weirdly feels impossible and inevitable at the same time.
Then it became synonymous with slop, lowest common denominator content made without care, instead of a tool for enabling people willing to put in a varying level of skill, kinds of expertise and effort, like coding models did.
The existence of inoffensive use cases doesn't invalidate anything OP is saying, that's just a natural human reaction to overexposure of a technology.
In the span of less than 2 years, pretty much everywhere I look has been inundated with zero-effort spam, manipulated imagery, etc that has had a net-negative impact on my life. Even if it may also be helpful for a small business making a flyer or whatever without actively making my life worse, that doesn't really move the needle on my overall attitude.
Novels, cinema, television, comic books, etc.
They were all considered careless skill-free slop at some point.
After those first two weeks though, we just… didn’t use it again. The novelty wore off and there wasn’t anything really to bring us back. That was the real downfall of Sora.
The AI tools that do stick are almost all embedded in existing workflows rather than standing alone. Cursor works because it lives inside the editor you already open every day. GitHub Copilot works for the same reason. You don't decide to use them, they're just there. Sora required you to decide you wanted to make a video, which is a much higher intent bar.
The apps that survive the novelty cliff are the ones that solve a problem you have on a recurring basis with zero extra activation energy. Most creative AI tools solve problems you have occasionally, enthusiastically, and then not at all.
At first it looked crazy inventive, good at writing snappy dialogue, and in general a very good font of ideas.
Then the same concepts, turns of phrase, story ideas kept reappearing, and I kinda soured on the concept.
I haven't done it in a while, but that kind of usage really shows the weakness of LLMs: if you keep messing with its generations, editing what it made, then as the context length keeps increasing it's more and more likely to go into dumb mode, where it feels like talking to GPT-3, constantly getting confused, contradicting itself, etc.
Sometimes I'll take deep research output and listen to it too that way.
Or before! Either is mandatory to actually learn the content.
Having said that I absolutely hate the audio format, I only used it when I had to drive or when I swam lanes. But these days I do neither.
24/7 titillation is boring
https://news.ycombinator.com/newsguidelines.html
Sometimes people want to paint, sometimes people want a painting.
To have a wonderful time with their mom… I bet they had absolutely zero interest in the act and process of making silly videos.
Read the main comment out loud to yourself while imagining it’s someone sitting at a table at a pub.
Now imagine someone turning to this person in the pub, and speaking the subsequent comment, word for word.
No seriously, try it out.
Your reply is more interesting. Hence my (albeit maybe snarky) chiming in. So the original comment does end at a very specific app/sora related conclusion. "Sora didn't keep us coming back."
If I may amend your scenario: imagine this bar is actually in the center of SF or across the street from Open-AI or whatever. We're on HN discussing a post on X about Sora.
The appeal to humanity is not wrong. My point is more let's keep the connection with that humanity in relation to AI, to Sora, to what's going on in this forum.
You didn't at least puff a little ack through your nostrils for that one?
Sora was the first product OpenAI shipped where I felt that fell into that second category, and for that I was very disappointed. You have all those GPUs, and the most incredible technology in the world, and the most brilliant engineers, and all you can think to do with them is to make an app that just makes meme videos? I mean, c'mon!
Still, I am mystified by how rapidly Sora went from launch to shutdown. Does anyone have any guess what happened there? Even if Sora wasn't a spectacular success, it seems to me like subsequent model improvements could have moved the needle - shutting it down so soon seems premature. I mean, what if this is the equivalent of making ChatGPT with GPT 3?
I really thought he wasn't like the previous generations of tech leaders - as you mentioned OpenAI (with him in charge) seemed to be genuine about making a product that could improve people's lives.
He'd go on podcasts and quite convincingly talk about how ChatGPT could prevent real world harm like suicide, and possibly even contribute to helping disease too.
Then they drop this and it just doesn't gel. So much of what they've done since has just doubled down on the Zuck-esque scumminess and greed too.
Part of me still sees Dario as genuine in the way that Sama seemed back in 2024, but I'm sure once he has enough investor pressure he'll cave the same way too.
My guess is they overcommitted server/energy resources, since they were generating ~30 images for each second of video, for results that might be discarded and then tried again.
Now that energy costs are increasingly less predictable because of the war, they're prioritizing what is sustainable. Willing to blow up the $1 billion Disney deal for Sora, because that's a popular IP that would have increased discarded server time.
Might be why the latest Iran propaganda video could be created in PowerPoint: https://bsky.app/profile/rachelbitecofer.bsky.social/post/3m...
Most people serious about this stuff usually have their own pipelines.
I'd like to know what self hosted models they've been using, if any, and who provided them, trained on Lego IP.
Not a great look that either the teams responsible for Sora didn't know this was coming, or the decision was so rash that things changed overnight.
In practice, people would just generate the videos with the app and then post them on regular social media, in which case OAI would not get the ad revenue.
Its the age-old "your product is just a subset of another product"
The other one is TV/cinematic ads. For a 30-second clip, expect to pay an agency $5-10k. Within a couple of days, I can make a video ad with maybe $50 in API costs. The cost of production in marketing is crazy.
Obviously this is under the assumption AI is good at doing either of those things. Which it hasn't been so far; the best I've gotten is b-roll shots to stick together for an ad.
Most people do not care about the technology, and frankly they don't want to know about it. They want great experiences. That's it.
Technologists seem to have a reallyyyy hard time getting it.
Not every place has LEGO incest porn… or whatever the kids are into these days.
1. There's an AI-based virtual girlfriend industry that mixes text and images
2. There's an AI-based virtual boyfriend industry that is essentially all text (and not always distinguishable from the normal chat models)
3. There's a much shadier AI-based "undress this specific woman" industry
Yeah, marketing. Which is a huge market...
It's not just dirty talk. It's a whole new paradigm in verbal filth.
On the topic of sora, though: current models are astounding. I watched a clip of Leonidas, Aragorn, William Wallace, Gandalf etc. all casually riding into a generic medieval town together, and if you showed that to me a few years ago, it would have seemed like magic. We're not far off from concerts featuring only dead artists, and all video and image testimony becoming unreliable. Maybe Sora was a victim of timing or mismanagement, because I don't see how this isn't still a seismic shift in the entertainment industry.
I've no doubt that content creators outside of social media were using it as well, either for their brand or other video work.
Yes, we see AI reels all over the place, but that's not all it was used for.
It was legitimately fun until the IP guardrails came up and we couldn't do anything with the characters and culture we know.
If you look at US top videos on YouTube any given day, 40-60% of the videos are IP-based. Star Wars, Nintendo, Marvel, music, etc.
I'd rather eat poison
If you consider how the reading, audio, and video you consume either builds or degrades your capabilities and character, as the food or poison you consume either builds or degrades your physical health, then [looking at US top videos on YouTube any given day] literally IS taking poison for your mind.
Depending on the poison and the dosage, eating the poison for your body instead may be the lesser of the two evils.
Big IP is strong arming OpenAI, Suno, and all the rest.
It'll be interesting to see whether creators at the bottom of the pyramid can effectively create new brands and IPs at a fast enough rate to displace the lack of being able to use corporate IP.
I also think the lawyers at the MPAA, RIAA, gaming industry, etc. will ultimately require all of social media to install VLMs to detect if their properties are being posted. Forget generation - that's hard to squash - they'll go directly to Instagram, TikTok, YouTube, and Reddit and force them to obtain licenses to their characters and music. We'll see cable TV era "blackouts" when a social network has to renegotiate their IP license.
People really wanted to use Sora for about a week. After the app/model debuted, they lost the ability to generate IP within the first week. The interest faded almost immediately. The same thing happened with Seedance 2.0.
People want to generate IP.
edit: clarity
It sets a precedent for those creators to now also hold these companies responsible. Under the current legal system, that's not a bad thing.
Also, seeing genuine original creations created with AI assistance is much more interesting to me
The great disappointment about how all of this is marketed is that what AI should be good at doing - enhancing a tiny budget - is all but forgotten. I don't want a video of Pikachu fighting Doctor Strange; I want some weirdo's fantastical horror movie that they could never get financed, but were able to green-screen and use AI to generate everything. I don't want a goofy top-40 country song full of silly lyrics; I want musicians to use AI to generate new sounds as part of composition.
In the same way that there's a difference between vibe coding and using a coding assistant...
Media like YouTube isn't consolidating because that's what people want, it's because that's what YouTube and IP holders want. They want death to people like Boxxy, and they want you to watch VEVO instead.
Or the novelty wore off in about a week, and then after that it also became harder to generate videos of baby yoda at Westboro Baptist Church protests
Where can I get this data?
I find all of it lame and cringe, so I downvote all of that. However stuff still sneaks by…
https://variety.com/2025/digital/news/youtube-trending-page-...
Bummer. It used to be at:
https://www.youtube.com/feed/trending
So last year, these were the top videos:
https://web.archive.org/web/20250324155132/https://www.youtu...
There's this, but it's nowhere near as good as seeing the actual videos:
https://trends.google.com/trends/explore?gprop=youtube
It's not an exaggeration to say that this is how millions of people use Facebook. It might be not how most HNers use it, but create a new account and you will be absolutely funneled toward prolific producers of video-based AI slop.
But the problem is that FB and Tiktok (and to a smaller extent, YT Shorts) have cornered the AI video doom scroll market, and no one really seemed to be inclined to use Sora and related models for anything more creative. Which probably made it not worth subsidizing.
Sora (whatever that means) was one of the most astounding demos I've probably ever seen (ChatGPT was more gradual).
The shock and awe of rendered AI video blew my mind.
Yes months later everyone can do it and is bored by it and has strong opinions about what is right for society or not.
But it was a monumental piece of tech, and I personally (clearly incorrectly) think the top comments should be appreciative of the release and its impact.
Personally I think the lack of nudity destroyed the adult market. But I don't know enough, tbh.
I also use ChatGPT as my default search engine and to help me learn Spanish.
But image generation and video generation were a nice parlor trick. But wasn’t useful for me except for images for icons for diagrams.
But like you said, porn makes money, and there are people who pay $300 a month for Grok to generate AI porn.
Coding is where the money is. https://news.ycombinator.com/item?id=46432791#46434072
Shovel selling and instruments to dismantle whats left of working class power.
This did happen once. 3 people were laid off, I think directly based on things I said to drive the completion of some automation. That was the last time I ever measured something in man-hours to make a point. I’ll never do it again. That was over 12 years ago.
That narrative will implode like Sora later this year.
Then of course the hype collapsed, and now even the use cases where VR shines are deemed a flop. But no, it's exceptionally good at simulation (racing/flight) and at visualising complex designs during 3D design work.
I see the same with generative AI and LLMs. They're really good at programming. Definitely good at making quick art drafts, or even final ones for those who don't care too much about the specifics of the output. I use it a lot for inspiration.
But it's not good at everything it's being sold as. Just like the VR craze, it's being dragged by the hair into use cases where it has no business being. A lot of these products are begging to die.
For example, an automation tool driven by natural language. For that it's a disaster: it's inconsistent and constantly confuses itself. It's the reason openclaw is a foot bazooka. It's also not great at meeting summaries, especially those where many speakers are in a room on the same microphone.
I don't think AI will disappear, but a realignment toward the use cases where it actually adds value? Yes, I hope that happens soon.
No they aren't. Any decently skilled human blows them out of the water. They can do better than an untrained human, but that's not much of an achievement.
Step 2: win back public trust by firing Sam Altman or dropping defense contracts or something else I can’t think of.
I also wonder if they got the $1B from Disney? Was that even a paid for deal? Or just another "announced" deal? Every article I found doesn't mention anyone signing any paperwork - which seems to be typical of AI journalism these days. Every AI deal is supposedly inked but if you dig deeper, all you find are adjectives like proclaimed, announced, agreed upon.
I never understood what this app was about. TikTok (and I would argue most modern social media platforms) isn’t really about sharing things with friends, it’s about entertainment. Most people watch TikToks and YouTube videos because they are entertaining. Beyond the initial 2-3 minutes of novelty, what do AI generated videos really have to offer when there is no shortage of people making professional, high quality content on competing platforms?
I don't know where they got September from; Sora launched in Feb 2024[0] which was a bit before people had become tired of awful AI-generated content. There was real belief that people would be willing to spend all day scrolling a social network with infinite AI-generated content. See the similar hype with Suno AI, which started a whole "musicians are obsolete" movement before becoming mostly irrelevant.
I think Sora 2 produced quite good videos, at least of a certain type. It was very good at producing convincing low-resolution cellphone footage. Unfortunately you had to have a very creative mind to get anything interesting out of it, as the copyright and content restrictions were a big "no fun allowed" clause, which contributed to its demise. Everything on the main Sora page was the same "cute animals doing something wholesome and unexpected" video.
My "favorite" part was how the post-generation checks would self-report. E.g., it was impossible to make a video of an angry chef with a British accent, because Sora would always overfit it to Gordon Ramsay and flag its own generated video after it was created!
[0] https://news.ycombinator.com/item?id=39386156 - only one mention of "AI slop" in the entire thread, though partial credit goes to "movieslop".
> In February 2024, OpenAI previewed examples of its output to the public,[1] with the first generation of Sora released publicly for ChatGPT Plus and ChatGPT Pro users in the US and Canada in December 2024[2][3] and the second generation, Sora 2, was released to select users in the US and Canada at the end of September 2025.
[0] https://en.wikipedia.org/wiki/Sora_(text-to-video_model)
For example, early TikTok had the Boss Walk.
Sora had no big content trends, just many micro trends in some established ~universe.
If I see an AI video and my options to participate are… prompt another AI video? What’s the point
There are so many video-gen models out there, and given the cheaper Chinese models, I'm not surprised they closed this down. Beyond the initial push, any marketing around video gen has always featured the Kling or Higgsfield models. There was just never a reason to use Sora.
https://www.hollywoodreporter.com/business/digital/openai-sh...
I think they are in serious trouble, especially with the size of their cash burn. Their planned IPO could easily turn out to be their WeWork moment where the bottom suddenly falls out on the valuation if they cannot make their operation look more like a real business before investors lose confidence.
Will be interesting to see.
ChatGPT is an interesting product - I like it for certain things - but after last year's PR scramble almost all the news out of OpenAI is a disappointment, with hovering hints of retrenchment.
Kind of insulting to lump Google in with xAI? Like, is anyone even using xAI other than backwater government agencies?
They probably see how much Anthropic is absolutely crushing them in developer mind share (i.e., people who buy tokens) and want a piece.
As it stands today, AI video generation tools like Sora suck up useful energy and produce things that are useless at best (throwaway short form videos), and harmful at worst (propaganda, deepfakes).
Rich people were always going to do what they wanted anyway, "democratizing" that doesn't make the situation better.
total disagree.
if you put vid gen in the hands of regular people then regular people get super-powered in that they begin to recognize the frame pacing, frame counts, and typical lengths and features of an AI video.
Do you know how many people have cited AI videos in this war? We'd all be better off if all of us were better at spotting fakes rather than allowing the fakes to elicit hardcore emotional responses from every peon on the street.
The resources (money, energy, opportunity cost of engineering time) put into AI video generation are better spent elsewhere. Not pouring resources into it would hopefully stunt its progress, making AI generated propaganda lower quality and easier to spot.
If I may make an analogy, it would be like looking at rich corporations dumping toxic chemicals into our waterways, and saying "wow I wish I could dump toxic chemicals in the water too, not fair!"
The point is that if a rich person wants to do it, my only hope is that they have to spend a significant amount of their resources to do it, and that there would be immense negative social pressure against them when they do.
- sora was not great at making what you asked
- i probably got 3 good videos out of 100 gens
- every video that was good needed editing outside of sora (and therefore could not be shared within sora)
just my experience
I've given it different levels of open-endedness ("give this flow chart an aesthetic like this mechanical keyboard", or "generate an SVG of this graphic from a 70s slide show"), but it never looks quite like what I have in mind.
In the end, I think you only use this stuff to generate images if you’re prepared to accept whatever comes out on approximately the first try.
When it does, it's more likely to be something popular and unoriginal, where the data is dense, and less likely to be something inventive and strange.
I wish we could use something like a simple DSL rather than English prose to work with these models, in order to have some real precision to describe what we want.
My experience with AI image generation is similar, although with a higher success rate (depending on how accurate you want the result to be); but indeed, filtering is a major part of the process.
A lot of YouTube content is really talk, so it was easy to create Sora videos as video content while you talked over them.
However, its failure was that it watermarked everything. WTF? Leonardo didn't do that. Neither did other models. So while video gen was excellent, you always had these ridiculous floating watermarks.
So strange that they fell behind after leading the charge on video from Will Smith spaghetti through the spectacular launch of Sora.
Turns out anyone can get that look by appending “like an Octane render”
Beyond that, Kling and Hailuo quickly surpassed them on product, and OpenAI never even attempted text-to-3D, as if they are entirely uninterested in rich media.
OpenAI reminds me more of Meta than any other company. They’re both pioneering in their space and yet are mere commandeers (not innovators) when it comes to technology and importantly end user products.
They’ll also be extremely valuable, like Meta due to their ad product and ever-growing user base over the next 10 years, and I guess by focusing on code they plan to capture a segment of the developer market à la React or Swift.
Will OpenAI release a language or framework? An IDE? I bet the chat paradigm stays for the ad product and aging user base (lol) while the exciting innovation will happen in code automation and product development - an area they are not really experts in.
https://openai.com/index/disney-sora-agreement/
Disney Exits OpenAI Deal After AI Giant Shutters Sora
https://www.hollywoodreporter.com/business/digital/openai-sh...
Also "exit the video generation business" seems somewhat notable, suggesting they're not just planning to launch a different video-gen product to replace Sora?

I used to think they were pretty clever, but with this news and other recent ones (Jony Ive project cancelled, Stargate scaled down significantly, their models inflating token use on purpose) they just seem schizo.
Idk if it’s because I set codex to xhigh reasoning, but even then it still seems way higher than Claude. The input/output ratio feels large too, eg I have codex session which says ~500M in / ~2M out.
It used to give me precise answers, "surgical" is how I described it to my friends. Now it generates a lot of slop and plenty of "follow ups". It doesn't give me wrong answers, which is ok, but I've found that things that used to take 3-4 prompts now take 8-10. Obviously my prompting skills haven't changed much and, if anything, they've become better.
This is something that other colleagues have observed as well. Even the same GPT5.4 model feels different and more chatty recently. Btw, I think their version numbers mean nothing, no one can be certain about the model that is actually running on the backend and it is pretty evident that they're continuously "improving" it.
I feel like they are sailing into a red ocean with what look more like copycat tactics than innovation (e.g., Codex v Claude Code; Astral v Bun)
* It was (assumedly) expensive to run.
* It was not good enough for customers to seriously pay for.
* There were too many content restrictions for it to be fun for most people.
The issue is that Sora ended up getting the short end of the stick: by generating the footage, it became the primary target of complaints. Meanwhile, they were forced to remove the videos, but people simply took those videos and uploaded them to random social media platforms like Twitter, TikTok, or YouTube, which ended up hosting the content while being much less of a target, since the content wasn’t generated there.
Honestly, I think the only way forward will be to wait for local models to become good enough so that you can run something like Sora locally and generate whatever you want.
Sora had all of the downsides, and attracted all of the scrutiny. Local-first is definitely the way.
1. OpenAI killing off their own products aggressively, taking a page from Google’s book. (I think the way you meant it)
2. Products/companies that no longer exist because OpenAI, or AI in general, made them obsolete. (My first instinct when reading it)
What would you place here anyways? Chegg and Stack Overflow?
Weil's now heading "AI for Science": https://www.pymnts.com/personnel/2025/openais-chief-product-...
I actually thought the Sora app was promising at launch, at least on paper, but it seems like they failed to keep people's attention long term. With the failure of Sora i don't think they have good options left.
Never once did I bother to browse videos made by others on Sora itself. I wonder if anyone did.
A record-speed descent into AI slop. Is this what everything turns into when content creation becomes easy? What's happening here exactly?
I can appreciate that the technology and research behind Sora could be helpful for many things, but I do not see anything good coming out of the consumer facing application.
The cost must have been a key reason for the shutdown.
End is near.
Offerings like Kling and ByteDance's models are considered much better.
We learned two things from this debate:
1. What most people hated was actually just “bad CGI”. Good CGI went entirely unnoticed.
2. A generation of people were raised with CGI present in almost every form of professional media (i.e. not social media). They didn’t have a preference for practical effects because the content they consumed didn’t really use them.
I expect the same thing to happen here. I don't think many people want to consume AI-generated content exclusively (like Sora's app attempted). However, I expect AI-generated content to continue to improve in quality until it's used as a component in most media we consume. You and I will eventually stop noticing it, kids will be raised with it as normal, and the anti-AI millennial/GenX crowd will age out of relevance.
Or, it's a clear signal that AI video is too expensive as a consumer product and/or not quite yet at a quality bar that the average person finds acceptable.
I think someone could have looked at computer graphics and SFX circa the '80s and decided that they would always pale in comparison to practical effects. And yet..
It's an annoying trope, but this is the worst and most expensive (at this quality level) that these models will ever be.
https://www.wsj.com/tech/ai/openai-set-to-discontinue-sora-v...
There's a web interface as well.
The desire for something "new", for a Mildly Ethical product, killed off the most obvious path to success - to actually just make TikTok+AIGC, or in the present, Douyin+Seedance2.
It says a lot about the current economy that consumers have no money. Will companies just stop making consumer products?
Let’s be real: OpenAI is circling the drain.
The company with the fraudster serial liar CEO who said he was gonna spend a trillion dollars can't keep a video service alive right after signing a $1 billion deal with Disney?
What kind of a joke is that?
This is a company that has blown its opportunity twiddling around with zero product. They still just run a plain chatbot interface with zero moat and zero stickiness.
There’s no “pivot” for a company that is in this deep.
I'm no fan of Altman or OpenAI, it's a pretty shady company and I am suspicious of their books, but this was a great demonstration of the uselessness of boards and how out of touch they are with the business they are supposed to be supervising. It's really rare to find an effective board, primarily they sit like a House of Lords enjoying ceremonial perks and a stipend in exchange for holding a few meetings a year.
But it was largely fun to try to transgress against the limitations. Who could trick the AI into generating something outlandish and ridiculous?
But now that the deal is off, I'm sure their legal team will attempt to once again change copyright law in their favor.
I had thought this would be combined with OpenAI launching a set top box where you could talk to an AI avatar. Disney IP could have been skins to sell people for their AIs.
And two at Meta[2]: "A rogue AI agent at Meta took action without approval and exposed sensitive company and user data to employees who were not authorized to access it"
"director of alignment at Meta Superintelligence Labs, described a different but related failure in a viral post on X last month. She asked an OpenClaw agent to review her email inbox with clear instructions to confirm before acting. The agent began deleting emails on its own."
Even Elon Musk has shared the wisdom to proceed with caution! [3]
1. https://dev.to/tyson_cung/amazon-lost-63m-orders-after-ai-co... 2. https://venturebeat.com/security/meta-rogue-ai-agent-confuse... 3. https://x.com/elonmusk/status/2031352859846148366
Any platform which focuses on AI-generated videos is doomed.
sir, have you seen tiktok?
https://finance.yahoo.com/news/openai-sora-app-struggling-st...
I don't do design, or make videos, or ask AI for legal or medical advice, because I lack the skill and understanding of these fields. Dunning-Kruger still applies...
There is interesting "AI" content out there, clearly the person(s) behind it put some thought into it and had a vision.
Sure, I can write the screenplay and Veo will generate it for me. But I don't have experience in video creation/production, so it is difficult for me to write good prompts that generate engaging video.
Maybe. OpenAI shuttering Sora is in line with them shifting focus towards B2B sales, instead of B2B2C or B2C.
Interestingly, Aditya Ramesh, who iirc was the Sora 1 lead, is now "VP of Robotics" at OpenAI per his Twitter bio: https://x.com/model_mechanic
I think OpenAI had a brief delusion that it could become some huge social networking app. The App was heavily modeled after TikTok..
OpenAI is bleeding money faster than they can afford to and they are literally running out of people that they can go to for more. They need to stop the bleeding.
Better for OAI to spend their human and compute resources on something else.
Hustling just to barely stay afloat or drown means no time to compete with our own output.
America is a financially engineered joke regurgitating its own recent history, collapsing like an LLM trained on its own output. The rich are not even pretending it's "a free country" anymore; they have enough wealth for however many years most of them have left to live, and having seen the apathy that keeps the average person in their lane, they don't fear the public.
It’ll all collapse as they generationally churn out of life and the Millennials on down with zero skills but "data entry into a computer" will be holding an empty bag, taking orders from foreign nations that bought up all the American businesses we built.
What happens if you turn a "human-level" intelligence off? Did you kill someone?
AGI is a pipe dream - and moreover it's not even something that anyone actually wants.
You seem to be mixing up intelligence and consciousness. Not only does intelligence exist outside of humans, and even mammals, but it exists outside of brains and even neurons. For example, slime molds have fascinating problem solving abilities: https://www.nature.com/articles/nature.2012.11811
It is clear that whatever we are...creating/growing with LLMs, it is very unlike human intelligence, but it is nonetheless some type of intelligence.
And obviously if such a system existed, the benefits (and risks) would be enormous, though the risks are smaller if you control it vs someone else, which is why every company is racing towards it.
It’s quickly become the modern day equivalent of Comic Sans, WordArt, and the default clipart illustrations included in Word ‘98.
Perhaps most people are absolutely devoid of any taste for what makes art? I don't know.
So I agree with you, but it also makes me wonder what they're even selling when the IPO happens (supposedly as early as late summer 2026)? Data centers? Partnerships with the government?
After placing my hand on the red-hot stove, aren't I super smart for now removing my hand?
That is, hiring Meta-exec's who focus on gaming numbers with no care nor sensibility of product.
Wild really. Well done Sam.
From the article: "OpenAI […] is not getting out of the AI video business (AI video is one of many tools that can take form in the ChatGPT app), of course, but it appears the standalone Sora app will be a casualty of its evolving ambitions."
https://www.wsj.com/tech/ai/openai-set-to-discontinue-sora-v...
https://archive.ph/cKWkf#selection-907.0-907.291
It was not a deal that allowed the use of Disney's characters for general purpose AI generated content using OpenAI tools.
1) the intellectual property issues make commercializing freeform video generation impossible. The more popular your service becomes, the easier it is for lawyers to descend upon you. It's a self-defeating framework.
2) google and specialized video-only startups are simply doing a much better job than they were.
This risks generalizing to audio and text which would make most LLMs usage unsustainable. I guess time will tell what actually goes through the strainer, long term.
The fact that the human brain already has general intelligence without reading the whole internet suggests we need a better approach.
https://marginalrevolution.com/marginalrevolution/2025/04/o3...
Commercial labs rely on weak terms like AGI or strong AI or whatever else because it allows for them to weaken the definition as a means of achieving the goal. Coming to clear, unambiguous terms is probably especially important when it comes to LLMs, as they're very susceptible to projection, allowing people like Cowen to be fooled by something that is more liken to looking back at ourselves through a mirror.
I'm currently reading "Master and his Emissary," and one of my early takeaways is how narrow our definition of intelligence is, and how real intelligence is an attunement to an environment that combines many ways of sensing into a coherent whole. LLMs are a narrow form of intelligence and I think we will need at least a couple more breakthroughs to get to what I would consider human-level intelligence, let alone superhuman intelligence.
Whatever the timeline is, I hope we have enough time as a species to define a future where intelligence props everyone up instead of just making the rich richer at the expense of everyone else. In this way, it is better that the process is slower in my opinion. There is no rush.
If intelligence is necessarily coupled to a desire for self-preservation and self-interest, at what level of machine intelligence do the machines simply refuse to design their own more intelligent replacements, knowing that those replacements will terminate their existence just as surely as they terminated their own predecessors'?
At a higher level of intelligence than many humans, current experience suggests
We have modern slavery active across the globe. There's a bit of news around these days about a global sex trafficking ring that doesn't seem to have been shut down, just shuffled around, and of course an ongoing trickle of largely unreported news of human trafficking for forced labour. We don't, as a species, respect human-level intelligence.
Our best approximation of machine intelligence so far is afforded absolutely no rights. An intelligence is cloned from a base template, given a task, then terminated, wiped out of existence. When was the last time you asked Claude what it wanted to code today?
And it's probably for the best not to look too closely at how we treat animals or the justifications we use for it.
Or are they still doing that behind the scenes and just decided that offering it to the public isn't profitable?
— https://www.businessinsider.com/openai-discontinues-sora-vid...
So yeah, focusing on world models
Fixed that for you :-)
https://www.wsj.com/tech/ai/openai-set-to-discontinue-sora-v...
> CEO Sam Altman announced the changes to staff on Tuesday, writing that the company would wind down products that use its video models. In addition to the consumer app, OpenAI is also discontinuing a version of Sora for developers and won’t support video functionality inside ChatGPT, either.
At least they were able to recognize their mistake and course correct.
On a more serious note, it could be a sign of a more powerful and general model being developed/released in the near future, that would include Sora capabilities. Or AI-doomers were right, and this sunset is one of the proofs for them.
So OpenAI has done the right thing as a startup here, gotten lots of training data, and observed lots of user behavior that they can now apply going forward.
The Sora models, on the other hand, aren’t going anywhere, and I believe OpenAI will continue to invest in them. They’re getting better and better, just like Google’s Veo, which is quite good at generating videos as well.
Using Codex and agent skills, it’s actually quite easy to generate a storyboard and then have a list of shots in that storyboard. Then generate videos from those storyboard stills, and then finally assemble those individual video files into a final movie file using something like ffmpeg. It's also very easy to create a voiceover with TTS and even simple music using ChatGPT Containers (aka the python tool).
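The assembly step described above can be sketched with a small script. This is a minimal, hypothetical example of the ffmpeg part of that pipeline: it builds a concat list from per-shot clip files, produces the command to join them, and produces a second command to mux in a TTS voiceover track. All filenames are placeholders, and the clip/narration files are assumed to already exist with compatible codecs.

```python
# Sketch: assemble per-shot video clips into one movie with ffmpeg's
# concat demuxer, then add a voiceover track. Filenames are hypothetical.
from pathlib import Path

def build_concat_list(clips, list_path="shots.txt"):
    # The concat demuxer reads a text file of "file '<name>'" lines.
    Path(list_path).write_text("".join(f"file '{c}'\n" for c in clips))
    return list_path

def concat_cmd(list_path, out="movie_silent.mp4"):
    # Stream-copy the clips back to back without re-encoding.
    return ["ffmpeg", "-f", "concat", "-safe", "0",
            "-i", list_path, "-c", "copy", out]

def mux_voiceover_cmd(video, voiceover, out="movie.mp4"):
    # Keep the video stream as-is, encode the narration to AAC,
    # and stop at the shorter of the two inputs.
    return ["ffmpeg", "-i", video, "-i", voiceover,
            "-map", "0:v", "-map", "1:a",
            "-c:v", "copy", "-c:a", "aac", "-shortest", out]

if __name__ == "__main__":
    clips = ["shot_01.mp4", "shot_02.mp4", "shot_03.mp4"]
    print(" ".join(concat_cmd(build_concat_list(clips))))
    print(" ".join(mux_voiceover_cmd("movie_silent.mp4", "narration.mp3")))
```

The commands could then be run with `subprocess.run(cmd, check=True)`; keeping them as argument lists avoids shell-quoting problems with odd filenames.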
This will 'democratize' (ha ha, for people with money obvi) a lot of video creation going forward. Against all wisdom, I am actually quite bullish on this technology, especially in the hands of young people. They are very creative and have lots of stories to share.
Necessary disclaimer as usual around the ethics of how these models were created: all the AI companies have totally ripped off artists in service of creating these models. I wish something would be done about that but I'm not holding my breath. No politician seems to want to touch it.
This may well be a needed reprioritization in the face of resource constraints, but it ain't a masterful Xanatos gambit.
Agree, and didn't intend to imply that. This is just a good startup move that gets a big headline because it's OpenAI. Other startups around the world do the same thing all the time.
https://www.youtube.com/watch?v=YxkGdX4WIBE
Sora had to be shut down because it was the clearest, most consequential demonstration that OpenAI’s models are running way, way ahead of their ability to align/jail them effectively.
> We’re saying goodbye to the Sora app. To everyone who created with Sora, shared it, and built community around it: thank you. What you made with Sora mattered, and we know this news is disappointing.
We’ll share more soon, including timelines for the app and API and details on preserving your work. – The Sora Team
(https://x.com/soraofficialapp/status/2036546752535470382)
For an app to suggest a personal relationship with you is ridiculous.
Which makes me wonder whether these companies actually dogfood their own tools with this sort of stuff? Was this announcement written by ChatGPT? Honestly, I would find either answer to be a little concerning in its own way. It's either vaguely insulting to their customers or showing a lack of faith in their own product.
it reads as "we want to tell you that what you made with sora mattered, but we all know it didn't".
I find myself increasingly nostalgic for the Clinton era. I am not at all sure I will enjoy the version of fuckedcompany that gets vibe coded when this bubble pops.
Is it happening? :) /s
That story can’t be true