from Hacker News

Sora 2

by skilled on 9/30/25, 4:55 PM with 878 comments

Video: https://www.youtube.com/watch?v=gzneGhpXwjU

System card: https://openai.com/index/sora-2-system-card/

  • by the_duke on 9/30/25, 7:37 PM

    I haven't seen comments regarding a big factor here:

    It seems like OpenAI is trying to turn Sora into a social network - TikTok but AI.

    The webapp is heavily geared towards consumption, with a feed as the entry point, liking and commenting for posts, and user profiles having a prominent role.

    The creation aspect seems about as important as on Instagram, TikTok etc - easily available, but not the primary focus.

    Generated videos are very short, with minimal controls. The only selectable option is picking between landscape and portrait mode.

    There is no mention or attempt to move towards long form videos, storylines, advanced editing/controls/etc, like others in this space (eg Google Flow).

    Seems like they want to turn this into AITok.

    Edit: regarding accurate physics ... check out these two videos below...

    To be fair, Veo fails miserably with those prompts also.

    https://sora.chatgpt.com/p/s_68dc32c7ddb081919e0f38d8e006163...

    https://sora.chatgpt.com/p/s_68dc3339c26881918e45f61d9312e95...

    Veo:

    https://veo-balldrop.wasmer.app/ballroll.mp4

    https://veo-balldrop.wasmer.app/balldrop.mp4

    I couldn't help but mock them a little; here is a bit of fun... the prompt adherence is pretty good, at least.

    NOTE: there are plenty of quite impressive videos being posted, and a lot of horrible ones also.

  • by davidmurdoch on 10/1/25, 1:20 AM

    I just asked GPT 5 to generate an image of a person. I then asked it to change the color of their shirt. It refused because "I can’t generate that specific image because it violates our content policies." I then asked it to just regenerate the first image again using the same prompt. It replied: "I know this has been frustrating. You’ve been really clear about what you want, and it feels like I’m blocking you for no reason. What’s happening on my side is that the image tool I was using to make the pictures you liked has been disabled, so even if I write the prompt exactly the way you want, I can’t actually send it off to generate a new image right now."

    If I start a new chat it works.

    I'm a Plus subscriber and didn't hit rate limits.

    This video gen tool will probably be even more useless.

  • by mscbuck on 9/30/25, 11:19 PM

    I can't help but see these technologies and think of Jeff Goldblum in Jurassic Park.

    My boss sends me complete AI workslop made with these tools and goes "Look how wild this is! This is the future", or sends me a YouTube video with less than a thousand views of a guy who created UGC with Telegram and point-and-click tools.

    I don't think he ever takes a beat, looks at the end product, and asks himself, "Who is this for? Who even wants this?", and that's aside from the fact that I still think there are so many obvious tells with this content that you know right away it is AI.

  • by simonw on 9/30/25, 6:13 PM

    The main lesson I learned from the March ChatGPT image generation launch - which signed up 100 million new users in the first week - is that people love being able to generate images of their friends and family (and pets).

    I expect the "cameo" feature is an attempt at capturing that viral magic a second time.

  • by saguntum on 9/30/25, 7:15 PM

    I wonder if they're going to license this to brands for heavily personalized advertisement. Imagine being able to see videos of yourself wearing clothes you're buying online before you actually place the order, instead of viewing them on a model.

    If they got the generation "live" enough, imagine walking past a mirror in a department store and seeing yourself in different clothes.

    Wild times.

  • by rushingcreek on 9/30/25, 5:20 PM

    The most interesting thing by far is the ability to include video clips of people and products as a part of the prompt and then create a realistic video with that metadata. On the technical side, I'm guessing they've just trained the model to conditionally generate videos based on predetermined characters -- it's likely more of a data innovation than anything architectural. However, as a user, the feature is very cool and will likely make Sora 2 very useful commercially.

    However, I still don't see how OpenAI beats Google in video generation. As this was likely a data innovation, Google can replicate and improve this with their ownership of YouTube. I'd be surprised if they didn't already have something like this internally.

  • by btbuildem on 9/30/25, 10:59 PM

    They're really playing loose with copyright: you have to actively opt out for them to not use your IP in the generated videos [1]

    Tangentially related: it's wild to me that people heading such consequential projects have so little life experience. It's all exuberance and shiny things, zero consideration of the impacts and consequences. First Meta with "Vibes", now this.

    1: https://www.gurufocus.com/news/3124829/openai-plans-to-launc...

  • by kveykva on 9/30/25, 5:11 PM

    The example prompt "intense anime battle between a boy with a sword made of blue fire and an evil demon" is super clearly just replicating Blue Exorcist: https://en.m.wikipedia.org/wiki/Blue_Exorcist
  • by samuelfekete on 9/30/25, 8:56 PM

    This is a step towards a constant stream of hyper-personalised AI generated content optimised for max dopamine.
  • by cogman10 on 9/30/25, 10:43 PM

    I've seen a lot of "this is impressive" but I'm not really seeing it. This looks to suffer from all the same continuity problems other AI videos suffer from.

    What am I looking at that's super technically impressive here? The clips look nice, but from one cut to the next there's a lot of obvious differences (usually in the background, sometimes in the foreground).

  • by gorgoiler on 9/30/25, 5:55 PM

    Impressively high level of continuity. The only errors I could really call out are:

    1/ 0m23s: The moon polo players begin with the red coat rider putting on a pair of gloves, but they are not wearing gloves in the left-vs-right charge-down.

    2/ 1m05s: The dragon flies up the coast with the cliffs on one side, but then the close-up has the direction of flight reversed. Also, the person speaking seemingly has their back to the direction of flight. (And a stripy instead of plain shirt and a harness that wasn’t visible before.)

    3/ 1m45s: The ducks aren't taking the right hand corner into the straightaway. They are heading into the wall.

    I do wonder what the workflow will be for fixing any more challenging continuity errors.

  • by TheAceOfHearts on 9/30/25, 7:59 PM

    Really impressive engineering work. The videos have gotten good enough that they can grab your attention and trigger a strong uncanny valley feeling.

    I think OpenAI is actually doing a great job of easing people into these new technologies. It's not such a huge leap in capabilities that it's shocking, and it helps people acclimate to what's coming. This version is still limited, but you can tell that in another generation or two it's going to break through some major capability threshold.

    To give a comparison: in the LLM model space, the big capabilities threshold event for me came with the release of Gemini 2.5 Pro. The models before that were good in various ways, but that was the first model that felt truly magical.

    From a creative perspective, it would be ideal if you could first generate a fixed set of assets, locations, and objects, which are then combined and used to bring multiple scenes to life while providing stronger continuity guarantees.

  • by willahmad on 9/30/25, 6:01 PM

    I wonder about the implications of this tech.

    The state of things with doomscrolling was already bad; add to it layoffs and replacing people with AI (just admit it, interns are struggling to compete with Claude Code, Cursor and Codex).

    What's coming next? A bunch of people with lots of free time watching nonsense AI-generated content?

    I am genuinely curious, because I was and still am excited about AI, until I saw how doomscrolling is getting worse.

  • by adidoit on 9/30/25, 6:07 PM

    Impressive tech. Don't love the likely societal implications.
  • by mempko on 9/30/25, 5:26 PM

    It's obvious there is no way OpenAI can keep videos generated by this within their ecosystem. Everything will be fake, nothing real. We are going to have to change the way we interact with video. While it's obviously possible to fake videos today, it takes work and skill by the creator. Now it will take no skill, so the obvious consequence is that we can't believe anything we see.

    The worst part is we are already seeing bad actors saying 'I didn't say that' or 'I didn't do that, it was a deep fake'. Now you will be able to say anything in real life and use AI for plausible deniability.

  • by baalimago on 10/1/25, 6:26 AM

    They can't even be consistent within their own launch video. Consistency is by far the biggest issue with generative AI. How can a professional studio work with scenes that have continuity errors in every single shot? And if it's not targeting professionals, who is it for?
  • by minimaxir on 9/30/25, 5:14 PM

    OpenAI apparently assumes that the primary users of Sora 2/the Sora app will be Gen Z, especially given the demo examples shown in the livestream. If they are trying to pull users from TikTok with this, it won't work: there's more nuance to Gen Z interests than being quirky and random, and if they did indeed pull users from TikTok, then ByteDance could easily include their own image/video generators.

    Sora 2 itself as a video model doesn't seem better than Veo 3/Kling 2.5/Wan 2.2, and the primary touted feature of having a consistent character can be sufficiently emulated in those models with an input image.

  • by SeanAnderson on 9/30/25, 7:24 PM

    Sheeeeeeeeeeesh. That was so impressive. I had to go back to the start and confirm it said "Everything you're about to see is Sora 2" when I saw Sam do that intro. I thought there was a prologue that was native film before getting to the generated content.
  • by simonw on 9/30/25, 5:40 PM

    Anyone with access able to confirm if you can start this with a still image and a prompt?

    The recent Google Veo 3 paper "Video models are zero-shot learners and reasoners" made a fascinating argument for video generation models as multi-purpose computer vision tools in the same way that LLMs are multi-purpose NLP tools. https://video-zero-shot.github.io/

    It includes a bunch of interesting prompting examples in the appendix, it would be interesting to see how those work against Sora 2.

    I wrote some notes on that paper here: https://simonwillison.net/2025/Sep/27/video-models-are-zero-...

  • by haolez on 9/30/25, 6:52 PM

    One use that occurred to me is that fans will be able to "fix" some movies that dropped the ball.

    For example, I saw a lot of people criticizing "Wish" (2023, Disney) for being a good movie in the first half and totally dropping the ball in the second half. I haven't seen it yet, but I'm wondering if fans will be able to evolve the source material in the future to get the best possible version of it.

    Maybe we will even get a good closure for Lost (2004)!

    (I'm ignoring copyright aspects, of course, because those are too boring :D)

  • by rd on 9/30/25, 5:17 PM

    https://apps.apple.com/us/app/sora-by-openai/id6744034028

    App link

    edit: CBN80W for an invite code

  • by mdrzn on 9/30/25, 5:16 PM

    If this is anything near the demos they have released, this seems incredibly good at physics. Wow. Can't wait to try the new app.

  • by stan_kirdey on 9/30/25, 6:25 PM

    That could totally power the next generation of green-screen tech. Generative actors may not find a favorable response from audiences; but SFX, decor, extras, environments that react to actors' actions: amazing potential.
  • by etrvic on 10/1/25, 10:46 AM

    In light of some comments and videos here, I’d like to morbidly announce that I can no longer distinguish between AI videos and real ones. However, I’ll take this as an opportunity to move from short-form content to long-form, since it seems that space hasn’t yet been hijacked by AI.
  • by TechSquidTV on 10/1/25, 1:56 PM

    Not related to Sora, but I have been looking for / hoping for an AI-powered motion tracking solver. I've used Blender and Mocha in AE and both still require quite a bit of manual intervention, even in very simple scenes.

    I saw some promise with the Segment Anything model but I haven't seen anyone turn it into a motion solver yet. In fact, I'm not sure if it can do that at all. It may be that we need an AI algorithm to translate the video into a simpler rendition (colored dots representing the original motion) that can then be tracked more traditionally.
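    The "dots tracked more traditionally" idea can be sketched without any AI at all: a tiny patch-matching (sum-of-squared-differences) tracker following a bright dot through synthetic frames. This is purely illustrative; the frame size, dot, and search window are invented for the demo, not taken from any real solver:

```python
import numpy as np

def make_frame(h, w, cx, cy):
    """Render a synthetic frame: a bright 3x3 dot on a dark background."""
    frame = np.zeros((h, w))
    frame[cy - 1:cy + 2, cx - 1:cx + 2] = 1.0
    return frame

def track_point(prev, curr, pt, win=5):
    """Track a point between frames by exhaustive patch matching
    (sum of squared differences) over a small search window."""
    y, x = pt
    patch = prev[y - 2:y + 3, x - 2:x + 3]
    best, best_pt = np.inf, pt
    for dy in range(-win, win + 1):
        for dx in range(-win, win + 1):
            ny, nx = y + dy, x + dx
            cand = curr[ny - 2:ny + 3, nx - 2:nx + 3]
            if cand.shape != patch.shape:
                continue  # skip windows that fall off the frame edge
            ssd = np.sum((cand - patch) ** 2)
            if ssd < best:
                best, best_pt = ssd, (ny, nx)
    return best_pt

# Synthetic "video": a dot moving 2 px to the right each frame.
frames = [make_frame(32, 32, 10 + 2 * t, 16) for t in range(5)]
track = [(16, 10)]
for a, b in zip(frames, frames[1:]):
    track.append(track_point(a, b, track[-1]))
print(track)  # the recovered trajectory follows the dot's motion
```

    In a real pipeline the dots would come from a segmentation model rather than being rendered, and an established tracker (e.g. Lucas-Kanade optical flow) would replace the brute-force search.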

  • by jablongo on 9/30/25, 5:28 PM

    Sam Altman has made (for me) encouraging statements in the past about short-form video like TikTok being the best current example of misaligned AI. While this release references policies to combat "doomscrolling and RL-sloptimization", it's curious that OpenAI would devote resources to building a social app based on AI-generated short-form video, which seems to be a core problem in our world. IMO you can't tweak the TikTok/YouTube Shorts format and make it a societal good all of a sudden, especially with exclusively AI content. This is a disturbing development for Altman's leadership, and it sort of explains what happened in 2023 when they tried to remove him: he says one thing, does the opposite.
  • by aaroninsf on 9/30/25, 5:24 PM

    Someone who doesn't follow the moving edge would be forgiven for being confused by the dismissive criticism dominating this thread so far.

    It's not that I disagree with the criticism; it's rather that when you live on the moving edge it's easy to lose track of the fact that things like this are miraculous and I know not a single person who thought we would get results "even" like this, this quickly.

    This is a forum frequented by people making a living on the edge; I get it. But still, remember to enjoy a little that you are living in a time of miracles. I hope we have leave to enjoy that.

  • by seydor on 9/30/25, 8:47 PM

    Since AGI is cancelled, at least we have shopping and endless video.
  • by qoez on 9/30/25, 5:25 PM

    I know the comments here are gonna be negative, but I just find this so sick and awesome. It feels like it's finally close to the potential we knew was possible a few years ago. It feels like a Pixar moment, when CG tech showed a new realm of what was possible with Toy Story.
  • by minimaxir on 10/1/25, 1:15 AM

    This Sora 2 generation of Cyberpunk 2077 gameplay managed to reproduce it extremely closely, which is baffling: https://x.com/elder_plinius/status/1973124528680345871

    > How the FUCK does Sora 2 have such a perfect memory of this Cyberpunk side mission that it knows the map location, biome/terrain, vehicle design, voices, and even the name of the gang you're fighting for, all without being prompted for any of those specifics??

    > Sora basically got two details wrong, which is that the Basilisk tank doesn't have wheels (it hovers) and Panam is inside the tank rather than on the turret. I suppose there's a fair amount of video tutorials for this mission scattered around the internet, but still––it's a SIDE mission!

    Everyone already assumed that Sora was trained on YouTube, but "generate gameplay of Cyberpunk 2077 with the Basilisk Tank and Panam" would have generated incoherent slop in most other image/video models, not verbatim gameplay footage that is consistent.

    For reference, this is what you get when you give the same prompt to Veo 3 Fast (trained by the company that owns YouTube): https://x.com/minimaxir/status/1973192357559542169

  • by echelon on 9/30/25, 7:18 PM

    I'm a software engineer and hobbyist actor/director. My friends are in the film industry and are in IATSE and SAG-AFTRA. I've made photons-on-glass films for decades, and I frequently film stuff with my friends for festivals.

    I love this AI video technology.

    Here are some of the films my friends and I have been making with AI. These are not "prompted", but instead use a lot of hand animation, rotoscoping, and human voice acting in addition to AI assistance:

    https://www.youtube.com/watch?v=H4NFXGMuwpY

    https://www.youtube.com/watch?v=tAAiiKteM-U

    https://www.youtube.com/watch?v=7x7IZkHiGD8

    https://www.youtube.com/watch?v=Tii9uF0nAx4

    Here are films from other industry folks. One of them writes for a TV show you probably watch:

    https://www.youtube.com/watch?v=FAQWRBCt_5E

    https://www.youtube.com/watch?v=t_SgA6ymPuc

    https://www.youtube.com/watch?v=OCZC6XmEmK0

    I see several incredibly good things happening with this tech:

    - More people being able to visually articulate themselves, including "lay" people who typically do not use editing software.

    - Creative talent at the bottom rungs being able to reach high with their ambition and pitch grand ideas. With enough effort, they don't even need studio capital anymore. (Think about the tens of thousands of students that go to film school that never get to direct their dream film. That was a lot of us!)

    - Smaller studios can start to compete with big studios. A ten person studio in France can now make a well-crafted animation that has more heart and soul than recent by-the-formula Pixar films. It's going to start looking like indie games. Silksong and Undertale and Stardew Valley, but for movies, shows, and shorts. Makoto Shinkai did this once by himself with "Voices of a Distant Star", but it hasn't been oft repeated. Now that is becoming possible.

    You can't just "prompt" this stuff. It takes work. (Each of the shorts above took days of effort - something you probably wouldn't know unless you're in the trenches trying to use the tech!)

    For people that know how to do a little VFX and editing, and that know the basic rules of storytelling, these tools are remarkable assets that complement an existing skill set. But every shot, every location, every scene is still work. And you have to weave that all into a compelling story with good hooks and visuals. It's multi-layered and complex. Not unlike code.

    And another code analogy: think of these models like Claude Code for the creative. An exoskeleton, but not the core driving engineer or vision that draws it all together. You can't prompt a code base, and similarly, you can't prompt a movie. At least not anytime soon.

  • by Awesomedonut on 10/1/25, 6:39 PM

    Their anime vid gen is really, really impressive. The results I've seen aren't /good/ by industry standards (nothing compared to the likes of the Demon Slayer movie I watched in theatres recently), but I legitimately couldn't tell that they were AI-generated. Massive step up from Sora 1 and other vid gen models.

    Here's to hoping that the industry will adapt to have it aid animators for in-betweening and other things that supplement production. Anime studios are infamously terrible with overworking their employees, so I legitimately see benefits coming from this tool if devs can get it to function as proper frame interpolation (where animators do the keyframes themselves and the model in-betweens).
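    As a point of reference for what "proper frame interpolation" would have to beat, the naive baseline is just a linear cross-fade between two keyframes (a hypothetical NumPy sketch; real in-betweening models predict motion rather than blending pixels):

```python
import numpy as np

def inbetween(key_a, key_b, n):
    """Generate n linearly blended in-between frames between two keyframes.
    This is the naive baseline that motion-aware in-betweening improves on:
    it ghosts overlapping content instead of moving it."""
    frames = []
    for i in range(1, n + 1):
        t = i / (n + 1)  # evenly spaced blend weights in (0, 1)
        frames.append((1 - t) * key_a + t * key_b)
    return frames

# Toy keyframes: an all-black and an all-white 4x4 image.
key_a = np.zeros((4, 4))
key_b = np.ones((4, 4))
mids = inbetween(key_a, key_b, 3)  # blend weights 0.25, 0.5, 0.75
```

    A motion-aware model would warp content along trajectories instead of cross-fading it, which is exactly the in-betweening work the comment hopes animators could offload.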

  • by msp26 on 9/30/25, 5:13 PM

    The voice quality in the generated vids is surprisingly awful.
  • by modeless on 9/30/25, 5:28 PM

    I can see it being interesting to create wacky fake videos of your friends for a week or two, but why would people still be using this next year?

    I watch videos for two reasons. To see real things, or to consume interesting stories. These videos are not real, and the storytelling is still very limited.

  • by mempko on 9/30/25, 5:31 PM

    I predict a resurgence in live performances. Live music and live theater. People are going to get tired of video content when everything is fake.
  • by polishdude20 on 9/30/25, 7:22 PM

    There's something about the faces that looks completely off to me. I think it's the way the mouth and whole face moves when they talk.
  • by neom on 9/30/25, 7:12 PM

    Going to be an amazing source of training data, wait till they get it to real time and people are leaving their video camera open for AR features. OpenAI is about to have a lot of current real world image data, never mind the sentiment analysis.
  • by causal on 9/30/25, 5:16 PM

    IDK if the site is being hugged to death but I can only load the first video. Even in just one viewing there were noticeable artifacts, so my impression is that Veo is still in the lead here.
  • by darkwater on 9/30/25, 7:07 PM

    Last famous words:

    > A lot of problems with other apps stem from the monetization model incentivizing decisions that are at odds with user wellbeing. Transparently, our only current plan is to eventually give users the option to pay some amount to generate an extra video if there’s too much demand relative to available compute. As the app evolves, we will openly communicate any changes in our approach here, while continuing to keep user wellbeing as our main goal.

  • by neilv on 9/30/25, 8:04 PM

    > And we're introducing Cameo, giving you the power to step into any world or scene, and letting your friends cast you in theirs.

    How much are they (and providers of similar tools) going to be able to keep anyone from putting anyone else in a video, shown doing and saying whatever the tool user wants?

    Will some only protect politicians and celebrities? Will the less-famous/less-powerful of us be harassed, defamed, exploited, scammed, etc.?

  • by Aeolun on 9/30/25, 10:53 PM

    Clicking a link on the OpenAI dashboard and being greeted with a full page of scantily clad women was certainly not what I expected to see when opening Sora.
  • by d--b on 9/30/25, 6:45 PM

    Ok that's technically really impressive, and probably totally unusable in a real creativity context beyond stupid ads and politically-motivated deepfakes.
  • by dagaci on 9/30/25, 6:26 PM

    Amazing. iOS only, with region restrictions in 2025.
  • by sys32768 on 9/30/25, 7:21 PM

    I welcome a world where gullible people begin to doubt everything they see.
  • by jug on 9/30/25, 11:09 PM

    I feel so bad for the climate now.
  • by jsnell on 9/30/25, 5:25 PM

    Doing this as a social app somehow feels really gross, and I can't quite put to words why.

    Like, it should be preferable to keep all the slop in the same trough. But it's like they can't come up with even one legitimate use case, and so the best product they can build around the technology is to try to create an addictive loop of consuming nothing but auto-generated "empty-calories" content.

  • by nycdatasci on 9/30/25, 9:20 PM

    What makes TikTok fun is seeing actual people do crazy stuff. Sora 2 could synthesize someone hitting five full-court shots in a row, but it wouldn’t be inspiring or engaging. How will this be different than music-generating AI like Suno, which doesn't have widespread adoption despite incredible capabilities?
  • by DetroitThrow on 9/30/25, 5:17 PM

    Just seeing the examples, which I assume are cherry-picked, it seems like they're still behind Google when it comes to video generation; the physics and stylized versions of these shots seem not great. Veo 3 was such a huge leap and is still ahead of many of the other large AI labs.
  • by clgeoio on 9/30/25, 8:49 PM

    > Concerns about doomscrolling, addiction, isolation, and RL-sloptimized feeds are top of mind—here is what we are doing about it.

    > We are giving users the tools and optionality to be in control of what they see on the feed. Using OpenAI's existing large language models, we have developed a new class of recommender algorithms that can be instructed through natural language. We also have built-in mechanisms to periodically poll users on their wellbeing and proactively give them the option to adjust their feed.

    So, nothing? I can see this being generated and then reposted to TikTok, Meta, etc for likes and engagement.

  • by robotsquidward on 9/30/25, 7:19 PM

    It's insanely impressive. At the same time, all these videos look terrible to me. I still get extreme uncanny valley, and it literally makes me sick to my stomach.
  • by joshdavham on 9/30/25, 6:10 PM

    Will something like Sora 2 actually be used in Hollywood productions? If so, what types of scenes?

    I imagine it won’t necessarily be used in long scenes with subtle body language, etc involved. But maybe it’ll be used in other types of scenes?

  • by wantering on 10/3/25, 2:34 AM

    Latest invite codes: community-shared Sora invite codes, updated in real time.

    https://sorainvitecode.org/

  • by FullMetul on 9/30/25, 10:20 PM

    Maybe by Sora 3 they will have scene consistency. Gah, it's so jarring to me that the pool the racing ducks are in just randomly changes. My brain can tell it's not consistent scene to scene, and it feels so janky.
  • by mavamaarten on 10/1/25, 11:22 AM

    Ugh. While technically extremely impressive, I'm so tired of the slop. Every AI content generation tool should have a watermarking system in place, and sites like YouTube should have a way to filter AI-generated content out of search results at the press of a button.

    Ever since the launch of Veo, there have been so many AI slop videos on YouTube that it sometimes becomes hard to find real ones.

    I'm tired, boss.

  • by tminima on 10/1/25, 7:59 AM

    I feel that this is a data collection activity (and thus a path to more advanced future models and use cases) disguised as social media. People will provide feedback in the form of clicks/views on AI-generated content (a better version of RLHF) in unverified/subjective domains.

    The biggest problem OpenAI has is not having an immense data backbone like Meta/Google/MSFT have. I think this is a step in that direction: create a data moat which in turn will help them make better models.

  • by jack_riminton on 10/1/25, 9:28 AM

    Let's take a step back and realise how incredible this is (I'm sure there are plenty of other `ackshually` comments).

    Can it do Will Smith eating spaghetti? (I can't get access in UK)

  • by elpakal on 10/1/25, 2:28 AM

    Wish I was cool enough to have an invite code. Oh well; as an iOS build nerd, the next best thing I can do is inspect their ipa, I guess. Interesting that they have some pretty big duplicate mp4s nobody caught in NoFaceDesignSystemBundle: cameo_onboarding_0.mp4 & create_ifu_1.mp4 (7.3MB) and cameo_onboarding_2.mp4 & create_ifu_0.mp4 (5.2MB).

    Also I find it neat that they still include an iOSMath bundle (in chatGPT too), makes me wonder how good their models really are at math.
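    For anyone who wants to reproduce the duplicate-asset check on an unpacked ipa (or any directory), hashing file contents is enough. A generic stdlib sketch, demoed on a throwaway directory rather than OpenAI's actual bundle:

```python
import hashlib
import os
import tempfile
from collections import defaultdict

def find_duplicates(root):
    """Group files under `root` by SHA-256 content hash and
    return only the groups with more than one member."""
    by_hash = defaultdict(list)
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            by_hash[digest].append(name)
    return [sorted(group) for group in by_hash.values() if len(group) > 1]

# Demo: a throwaway directory with one duplicated payload.
with tempfile.TemporaryDirectory() as d:
    for name, data in [("a.mp4", b"x" * 100), ("b.mp4", b"x" * 100), ("c.mp4", b"y")]:
        with open(os.path.join(d, name), "wb") as f:
            f.write(data)
    dupes = find_duplicates(d)
print(dupes)  # → [['a.mp4', 'b.mp4']]
```

    For multi-gigabyte bundles you would hash in chunks instead of reading whole files into memory, but the idea is the same.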

  • by outlore on 9/30/25, 5:49 PM

    in a computer graphics course i took, we looked at how popular film stories were tied to the technical achievements of their era. for example, toy story was a story born from the newfound ability to render plastics effectively. similarly, the sora video seems to showcase a particular set of slow-moving scenes (or, when fast, disappearing into fluid water and clouds) which seem characteristic of this technology at the current moment in time
  • by bgwalter on 9/30/25, 8:05 PM

    What is the target market for this? The videos are not good enough for YouTube. They are unrealistic, nauseating and dorky. Already now any YouTube video that contains a hint of "AI" attracts hundreds of scathing comments. People do not want this.

    Let me guess, the ultimate market will be teenagers "creating" a Skibidi Toilet and cheap TikTok propaganda videos which promote Gazan ocean front properties.

  • by bamboozled on 9/30/25, 11:18 PM

    Soon, you won't even have to do anything to post a video of yourself doing something "interesting" on social media. What a time to be alive.

    There will for sure be large swathes of people who just lie about what they're doing and use AI to make it seem like they're skateboarding, or skiing, or whatever at a pro or semi-pro level, and have a lot of people watch it.

  • by intended on 9/30/25, 6:06 PM

    That dragon flew backwards at one point, didn't it?

    Impressive that THAT was one of the issues to find, given where we were at the start of the year.

  • by wltr on 10/1/25, 4:37 AM

    From watching the video, I get the impression that these guys just want to appear cool, and the product looks like that too. It's made to appear very cool, for people who won't ever use it, apparently. I got the same impression from that promo with Jony Ive. Beautiful, and don't you dare think it through.
  • by fariszr on 9/30/25, 5:16 PM

    Did they make human voices sound robotic on purpose? Is that some kind of AI fingerprinting? It's way too obvious.
  • by IncreasePosts on 9/30/25, 6:21 PM

    It's fitting that they host the video on Youtube, since that is where all of their training data came from.
  • by ElijahLynn on 9/30/25, 6:01 PM

    "download the Sora app"

    click

    takes me to the iPhone app store...

  • by ascorbic on 9/30/25, 6:50 PM

    This is super cool and fun and will almost certainly be really bad for society in loads of different ways. From the descriptions of all the guardrails they're needing to put in it seems like they know it too.
  • by alberth on 9/30/25, 8:51 PM

    Why do you have to download an app to use Sora 2 (vs it being available on the web like ChatGPT)?
  • by anshumankmr on 10/1/25, 1:34 AM

    I think someone called it many months back (and in fact I felt it too): the feed for Sora seemed very much like a social media app. Then the only thing left was to make it vertical scrolling with videos, and voila, you have your TikTok clone.
  • by gvv on 9/30/25, 5:23 PM

    Any idea if or when it will be available in EU? https://apps.apple.com/us/app/sora-by-openai/id6744034028

    edit: as per usual it's not yet...

  • by jp57 on 9/30/25, 8:14 PM

    Prediction: we'll see at least one Sora-generated commercial at the Super Bowl this year.
  • by sumeruchat on 9/30/25, 7:18 PM

    Shameless plug but I am creating a startup in this space called cleanvideo.cc to tackle some of the issues that will come with fake news videos. https://cleanvideo.cc
  • by ashu1461 on 9/30/25, 8:43 PM

    This is a good comparison thread of the capabilities of Sora vs Sora 2:

    https://x.com/mattshumer_/status/1973085321928515783

  • by vahid4m on 9/30/25, 8:18 PM

    The quality of what I'm seeing is very nice for AI-generated content (I still can't believe it), but the fact that they are mostly showing short clips and not a long, connected, consistent video makes it less impressive.
  • by Gnarl on 10/1/25, 9:18 AM

    Amazing that even Sora2 can't make Sam Altman not look like a w@nker.
  • by doikor on 9/30/25, 7:38 PM

    Does this survive panning the camera away for 5 to 10 seconds and then back? Or basic conversation scene with the camera cutting between being located behind either speaker once every few seconds?

    Basically proper working persistence of the scene.

  • by whimsicalism on 9/30/25, 5:27 PM

    Find this sort of innovation far less interesting or exciting than the text & speech work, but it seems to be a primary driver of adoption for the median person in a way that text capability simply is not.
  • by nopinsight on 9/30/25, 10:55 PM

    OpenAI launches Sora 2 in a consumer app to collect RL feedback en masse and improve their world models further.

    Their ultimate goal is physical AGI, although it wouldn’t hurt them if the social network takes off as well.

  • by squidsoup on 9/30/25, 8:20 PM

    A little tangential to this announcement, but is anyone aware of any clean/ethical models for AI video or image generation (i.e. not trained on copyright work?) that are available publicly?
  • by Havoc on 9/30/25, 10:37 PM

    That sure seems to be getting close to something usable for movies...kinda.

    Sam looks weirdly like Cillian Murphy in Oppenheimer in some shots. I wonder whether there was dataset bleedover from that.

  • by tptacek on 9/30/25, 7:18 PM

    If I was on the OpenAI marketing team I maybe wouldn't have included the phrase "and letting your friends cast you in their [videos]". It's a little chilling.
  • by Lucasoato on 9/30/25, 11:03 PM

    > this app is not available in your country or region
  • by sandspar on 10/2/25, 6:46 AM

    Sora 2 is a lot of fun. Using it feels like a glimpse into the future. The last time I felt this was with the Sesame voice demo.
  • by GaggiX on 9/30/25, 6:28 PM

    The model's quality is incredible, but more tools are needed to take advantage of its capabilities; this is kinda the magic of open models.
  • by natiman1000 on 10/1/25, 12:36 AM

    The fact that no one is talking about how it compares against Veo tells me everything I need to know. This page is now filled with some bots!
  • by dyauspitr on 9/30/25, 7:03 PM

    How did they generate the videos with Sam Altman? Did they just provide a picture of his face and then use him in their prompts?
  • by NoahZuniga on 9/30/25, 8:00 PM

    TTS is horrible compared to Google's Veo 3
  • by VagabundoP on 9/30/25, 7:24 PM

    I hate this vacant technology tbh. Every video feels like distilled advert mindless slop.

    There's still something off about the movements, faces and eyes. Gollum features.

  • by LarsDu88 on 9/30/25, 8:09 PM

    I really hope they have more granular APIs around this.

    One use case I'm really excited about is simply making animated sprites and rotational transformations of artwork using these videogen models. But unlike with local open models, they never seem to expose things like depth estimation output heads, aspect ratio alteration, or other features that would actually make these useful tools beyond short-form content generation.

  • by alkonaut on 9/30/25, 6:41 PM

    How far out are we from doing this in real time? What’s the processing/rendering time per frame?
  • by unethical_ban on 9/30/25, 7:21 PM

    I just had a thought: (spoilers Expanse and Hyperion and Fire Upon the Deep)

    Multiple sci-fi-fantasy tales have been written about technology getting so out of control, either through its own doing or by abuse by a malevolent controller, that society must sever itself from that technology very intentionally and permanently.

    I think the idea of AGI and transhumanism is that moment for society. I think it's hard to put the genie back in the bottle because multiple adversarial powers are racing to be more powerful than the rest, but maybe the best thing for society would be if every tensor chip disintegrated the moment they came into existence.

    I don't see how society is better when everyone can run their own gooner simulation and share it with videos made of their high school classmates. Or how we'll benefit from being unable to trust any photo or video we see without trusting who sends it to you, and even then doubting its veracity. Not being able to hear your spouse's voice on the phone without checking the post-quantum digital signature of their transmission for authenticity.

    Society is heading to a less stable, less certain moment than any point in its history, and it is happening within our lifetime.

  • by thebiglebrewski on 9/30/25, 6:25 PM

    Can this be used to make hyper-realistic video games, or is it not real-time enough yet?
  • by kaicianflone on 9/30/25, 7:05 PM

    Why is the video player so laggy?
  • by qgin on 9/30/25, 6:53 PM

    VFX artists are definitely feeling the AGI / considering other career paths today.
  • by taikahessu on 10/1/25, 12:58 PM

    Entering code 123456 reveals Sora 2 is only available in US/Canada region.
  • by bergheim on 9/30/25, 6:19 PM

    We are just heading for Lovely All TM.

    I kid.

    Art should require effort. And by that I mean effort on the part of the artist. Not environmental damage. I am SO tired of non tech friends SWOONING me with some song they made in 0.3 seconds. I tell them, sarcastically, that I am indeed very impressed with their endeavors.

    I know many people will disagree with me here, but I would be heartbroken if it turned out someone like Nick Cave was AI generated.

    And of course this goes into a philosophical debate. What does it matter if it was generated by AI?

    And that's where we are heading. But for me, effort is required, and where we are going means close to 0 effort required. Someone here said that just raises the bar for good movies. I say it mostly means we will get 1 billion movies. Most are "free" to produce and displace the 0.0001% human-made/good stuff. I dunno. Whoever had the PR machine on point got the blockbuster. Not weird, since the studio tried 300,000,000 of them at the same time.

    Who the fuck wants that?

    I feel like that ship in Wall-E. Let's invest in slurpies.

    Anyway; AI is here and all of that, we are all embracing it. Will be interesting to see how all this ends once the fallout lands.

    Sorry for a comment that feels all over the place; on the tram :)

  • by colonial on 9/30/25, 6:16 PM

    Cool - now let's see how much it costs in compute to generate a single clip. (Also, notice how no individual scene is longer than a handful of seconds?)
  • by nickbettuzzi on 10/3/25, 2:27 AM

    hi there! would love an invite code if anyone reading this has a spare. really interesting stuff— thank you in advance! email is nick@usmobile.com
  • by drcongo on 9/30/25, 7:18 PM

    The AI generated Sam Altman doesn't look even vaguely human.
  • by 2OEH8eoCRo0 on 9/30/25, 5:22 PM

    Can it generate an analog clock displaying a given time?
  • by rvz on 9/30/25, 8:05 PM

    12,000+ "AI startups" have been obliterated.
  • by carabiner on 9/30/25, 7:34 PM

    CEO of Loopt makes a cameo at 1:28 in the youtube vid.
  • by beders on 9/30/25, 6:41 PM

    Can I finally redo the Star Wars sequels with this? :)
  • by Josh5 on 9/30/25, 9:48 PM

    Everyone has the widest eyes in these Sora videos.
  • by FrustratedMonky on 10/1/25, 1:26 AM

    Yeah, we've "plateaued" all right.
  • by basisword on 9/30/25, 6:11 PM

    Tens of billions in funding and they've just built a modern version of JibJab[1]. Can't wait to start receiving this in reply-all family emails.

    [1] https://youtu.be/z8Q-sRdV7SY?si=NjuyzL1zzq6IWPAe

  • by outside1234 on 10/1/25, 4:08 AM

    This is going to be a disaster. We are never going to be able to trust a video again and in short order propagandists are going to be using this to generate god knows what.
  • by taytus on 9/30/25, 5:26 PM

    Honest question: What problem does this solve?
  • by dvngnt_ on 9/30/25, 5:10 PM

    After using Wan with ComfyUI, I'm uninterested in closed platforms. They lack the amount of control, even if the quality might be better.
  • by barbarr on 9/30/25, 6:32 PM

    Instagram reels are gonna get crazy
  • by andybak on 9/30/25, 5:45 PM

    I've got used to immediately checking availability. In this case - iPhone app is US + Canada only and the website is invite only.

    Going back to sleep. Wake me up when it's available to me.

  • by boh on 9/30/25, 7:18 PM

    This is the kind of thing people get excited about for the first couple of months and then barely use it going forward. It's amazing how quickly the novelty of this amazing technology wears off. You realize how necessary meaning/identity/narrative is to media and how empty it gets (regardless of the output) when those elements are missing.
  • by carrozo on 9/30/25, 7:44 PM

    Sora 2: Sloppy Seconds
  • by baby on 10/1/25, 12:42 AM

    No android app right?
  • by amelius on 9/30/25, 9:30 PM

    Nicely cherry-picked.
  • by dcreater on 9/30/25, 10:51 PM

    Matrix here we come!
  • by ezomode on 9/30/25, 9:36 PM

    full-on productisation effort -> no AGI in sight
  • by mrcino on 9/30/25, 7:33 PM

    So, this is the AI Slop generator for the AI SlipSlop that Altman has announced lately.

    Brave new internet, where humans are not needed for any "social" media anymore, AI will generate slop for bots without any human interaction in an endless cycle.

  • by fersarr on 9/30/25, 8:58 PM

    Only iphone...
  • by _ZeD_ on 10/1/25, 4:30 AM

    Sora 2: Frato
  • by ambicapter on 9/30/25, 7:11 PM

    AI Sam Altman is terrifying, holy shit. Squarely in uncanny valley for me.
  • by umrashrf on 9/30/25, 10:47 PM

    hey @simoncion looks like they are doing this for self-promotion that's against the site's guidelines
  • by sudohalt on 9/30/25, 6:05 PM

    Now videos will be generated on the fly based on your preferences. You will never put your phone down; it will detect when you're sad or happy and generate videos accordingly.
  • by egeres on 9/30/25, 8:20 PM

    I wonder how this will affect the large cinema production companies (Disney, WB, Universal, Sony, Paramount, 20th Century...). The global film market was estimated at $100B in 2023. If the production cost of high-FX movies like Avengers: Infinity War goes down from $300M to just $10K in a couple of years, will companies like Disney restrain themselves to just releasing a few epic movies per year? Or will we be flooded with tons of slop? If this kind of AI content keeps getting better, how will movies sustain our attention and feel 'special'? Will people not care if an actor is AI or real?
  • by apetresc on 9/30/25, 7:52 PM

    If anyone is feeling generous with one of their four invite codes, I'd really appreciate it. I'm at adrian@apetre.sc.
  • by MangoToupe on 9/30/25, 6:39 PM

    Interesting that they're going with a "copyright opt-out": https://www.reuters.com/technology/openais-new-sora-video-ge...

    I guess copyright is pretty much dead now that the economy relies on violating it. Too bad those of us not invested into AI still won't be able to freely trade data as we please....

  • by dolebirchwood on 9/30/25, 9:21 PM

    This makes me less excited about the future of video, not more.

    It's technically impressive, but all so very soulless.

    When everything fake feels real, will everything real feel fake?

  • by LocalH on 10/1/25, 4:40 AM

    We're cooked.
  • by type0 on 10/2/25, 9:18 AM

    it should be renamed into sore ai
  • by ionwake on 9/30/25, 7:38 PM

    I think HN is too political. This tech is clearly amazing, and it's great they shipped it; there should be more props, even if it's a billion-dollar company.
  • by yahoozoo on 9/30/25, 10:38 PM

    Sam still pretending they’re close to AGI in the trailer lmao
  • by gainda on 9/30/25, 7:00 PM

    Impressive engineering that's hard to see as a net good for humanity.

    It doesn't spark optimism or joy about the future of engaging with the internet and content, which was already at a low point.

    old is gold, even more so

  • by CSMastermind on 9/30/25, 7:46 PM

    Anyone have an invite they want to share with me lol.
  • by groos on 9/30/25, 10:28 PM

    What is the point? Who wants to watch these videos?
  • by dragonwriter on 9/30/25, 6:55 PM

    “With Sora 2, we are jumping straight to what we think may be the GPT‑3.5 moment for video.”

    I think feeling like you need to use that in marketing copy is a pretty good clue in itself both that it's not, and that you don't believe it is so much as desperately wish it would be.

  • by bovermyer on 9/30/25, 7:14 PM

    "Thou shalt not create a machine in the likeness of a human mind."
  • by deng on 9/30/25, 6:46 PM

    As usual: impressive until you look close. Just freeze the frame and you see all the typical slop errors: pretty much any kind of writing is a garbled mess (look at the camera in the beginning). The horn of the unicorn sits on the bridle. The buttons on Sam's circus uniform hover in the air. There are candleholders with candles somehow inside as well as on top. The miniature instruments often make no sense. The conductor has 4 fingers on one hand and 5 on the other. The cheering of the audience is basically brown noise. Needless to say, if you freeze on the audience, hands are literally all over the place. Of course, everything conveniently has a ton of motion blur so you cannot see any detail.

    I know, I know. Most people don't care. How exciting.

  • by dweekly on 9/30/25, 5:26 PM

    So a social network that's 100% your friends doing silly AI things?

    I feel like this is the ultimate extension of "it feels like my feed is just the artificial version of what's happening with my friends and doesn't really tell me anything about how they're actually faring."

  • by dwa3592 on 9/30/25, 7:16 PM

    I don't know if it's just me or other people are feeling it as well. I don't enjoy videos anymore (unless live sports). I don't enjoy reading on my monitor anymore, I have been going back to physical books more often. I am in my early thirties.

    The point is that sora2 demo videos seemed impressive but I just didn't feel any real excitement. I am not sure who this is really helping.

  • by S0und on 9/30/25, 5:17 PM

    I find it comical that OpenAI, with all the power of ChatGPT, is still unable to release an app for both iOS and Android at the same time. Wow, good marketing for Codex.
  • by m3kw9 on 9/30/25, 6:05 PM

    I’m eagerly awaiting the unexpected social problems this will create
  • by beernet on 9/30/25, 5:13 PM

    Overall, appears rather underwhelming. Long way to go still for video generation. Also, launching this as a social app seems like yet another desperate try to productize and monetize their tech, but this is the position big VC money forces you into.
  • by marcofloriano on 9/30/25, 7:17 PM

    Every AI video demonstration is always about funny stuff and fancy situations. We never see videos on art, history, literature, poetry, religion (imagine building a video about the moment Jesus was born) ... ducks in a race !? Come on ...

    So much visual power, yet so little soul power. We are dying.

  • by ChrisArchitect on 9/30/25, 5:49 PM

  • by ath3nd on 9/30/25, 7:16 PM

    OpenAI is cooked.

    Absolutely cooked.

    After the disaster that was chatGPT4.001, study mode, and now this — an impossibly expensive-to-maintain AI video slop copyright violator — their releases are uninspired and bland, and smelling of desperation.

    Making me giddy for their imminent collapse.

  • by pton_xd on 9/30/25, 5:21 PM

    Someone remind me the benefits of mass produced fake videos again?
  • by iLoveOncall on 9/30/25, 7:37 PM

    Show me a coherent video that lasts more than 5 seconds and was generated with the model and maybe I'll start to care.
  • by mclightning on 9/30/25, 7:35 PM

    It is very underwhelming. It seems like a step backward. Scam altman should be replaced before he runs the company to bankruptcy.
  • by tonyabracadabra on 9/30/25, 11:07 PM

    If Sora 2 is aiming for AI‑Tok, ScaryStories Live is the jump-scare cousin: real‑time POV horror from a photo + a sentence. No film school, no GPU farm—just “upload face, pick fear level, go.” It’s less cinema, more haunted mirror, and it ships in seconds. scarystories.live