I’ve been rewatching Star Trek: Voyager recently, and I’ve heard that when filming, they didn’t clear the wider frame of filming equipment, so it’s not as simple as just going back to the original film. With the advancement of AI, is it only a matter of time until older programs like this are released in more updated formats?

And if so, do you think AI could also upscale it to 4K? So theoretically you could take an SD 4:3 program and make it 4K 16:9.

I’d imagine it would be easier for the early episodes of Futurama, for example, since it’s a cartoon and therefore less detailed.
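Just to put rough numbers on the question: a back-of-the-envelope sketch (assuming a 640×480 source and a 3840×2160 target; other SD resolutions shift the figures slightly) of how much picture an AI would have to invent:

```python
# Back-of-the-envelope: upscale SD 4:3 to 4K height, then count how many
# columns of brand-new 16:9 picture an AI would have to hallucinate.

def invented_columns(src_w, src_h, dst_w, dst_h):
    """Uniformly upscale to the target height and count the missing width."""
    scale = dst_h / src_h                 # 2160 / 480 = 4.5x upscale
    scaled_w = round(src_w * scale)       # 640 -> 2880 columns of real picture
    missing = dst_w - scaled_w            # columns with no source data at all
    return missing, missing / dst_w       # absolute and as a share of the frame

missing, share = invented_columns(640, 480, 3840, 2160)
print(missing, f"{share:.0%}")  # 960 invented columns, i.e. 25% of every frame
```

In other words, a quarter of every widened frame would be pure invention, which is what the framing objections in the replies are about.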

  • CaptainBlagbird@lemmy.world
    1 year ago

    I think it would be possible. But adding previously unseen stuff would be changing/redirecting the movie/show.

    Each scene is set up and framed deliberately by the director; should AI just change that? It’s a similar problem to pan-and-scan, where content was removed to fit 4:3.

    You wouldn’t want to add content to the left and right of the Mona Lisa, would you? And if so, what? Continue the landscape, which just adds more uninteresting parts? Now she sits in a vast space, and you’ve already changed the tone of the painting. Or would you add other people? That pulls the focus away from her, which is even worse. And this is just a single-frame example; there are even more problems with moving pictures.

    It would be an interesting experiment, but IMO it wouldn’t improve the quality of the medium; on the contrary.

      • Honytawk@lemmy.zip
        1 year ago

        I think both look great, better than the original because of the added content.

        You still get the same detail as the original, nothing about it is changed, just with a wider view.

    • Pechente@feddit.de
      1 year ago

      But adding previously unseen stuff would be changing/redirecting the movie/show.

      You could see this with The Wire 16:9 remaster. They rescanned the original negatives, which were shot in 16:9 but framed and cropped to 4:3. As a result, the framing felt a bit off and the whole thing came across as awkward and amateurish.

    • RightHandOfIkaros@lemmy.world
      1 year ago

      Sometimes that’s true, but not everything in a shot is important. There may be buildings or plants or people whose placement in the shot doesn’t matter; they exist only to communicate that the film is happening in a real, living world. 99% of directors don’t care where a tree in the background is, unless the tree is the subject of the shot.

      Whether AI improves a shot is debatable, but it is definitely possible. 4:3 media on a 16:9 display is pretty annoying to most people who see the black bars on the sides. Even if the AI only adds backgrounds or landscapes, simply removing the black bars would be enough of an improvement for most viewers.

      • SSTF@lemmy.world
        1 year ago

        If the AI is only drawing in unimportant objects, I wonder what the value is?

        At the risk of ruining the original framing, the potential gain is stuff you aren’t supposed to focus on?

        Who is out there watching classic TV shows who isn’t adapted to the old framing?

    • ColeSloth@discuss.tchncs.de
      1 year ago

      I think you’re looking at it from the wrong direction. Instead of adding new stuff to get the width, you could have AI stretch the image to fit 16:9 and then redraw everything so it no longer looks stretched out. Slim the people and words back down. Things like bottles on a table would be slimmed back to look like normal bottles, with the horizontal table drawn a bit longer to fill in the space, etc.

      If it were done this way, there would be a minimal amount of content the AI would have to create that wasn’t in the original 4:3. It would mostly just be fixing things that look wider than they should.
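      For what it’s worth, TVs have shipped a crude, non-AI version of this for years under names like “panorama” or “smart stretch”: leave the center essentially untouched and push all the distortion to the edges. A minimal sketch of that remap (the AI redraw step described above would then only have to fix the edges):

```python
# Non-linear "panorama" stretch: maps a normalized output column u in [-1, 1]
# (16:9 frame) back to a source column s in [-1, 1] (4:3 frame).
# Coefficients are chosen so the center is shown undistorted and all the
# stretching is pushed out to the left and right edges.

def panorama_sample(u):
    """Cubic remap s(u) = (4/3)u - (1/3)u^3, monotonic on [-1, 1]."""
    return (4 / 3) * u - (1 / 3) * u ** 3

# Endpoints still line up: the full source width fills the full output width.
assert abs(panorama_sample(1.0) - 1.0) < 1e-9

def local_stretch(u, eps=1e-6):
    """Local stretch factor (4/3) / s'(u), estimated numerically."""
    slope = (panorama_sample(u + eps) - panorama_sample(u - eps)) / (2 * eps)
    return (4 / 3) / slope

# 1.0 at the center (undistorted), but 4x at the very edges, which is why
# faces near the border of "smart stretch" TVs look smeared.
print(round(local_stretch(0.0), 2))  # 1.0
```

The 4x stretch at the edges is exactly the part an AI redraw pass would have to clean up.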

      • Kissaki@feddit.de
        1 year ago

        Stretching while preserving proportions is still stretching. You change the spacing and relative sizing between objects.

        Framing is not only about the border of the frame.

        • ColeSloth@discuss.tchncs.de
          1 year ago

          I mentioned how that would be taken care of in the bottles-on-tables description I made earlier. Also, the framing of shots would change very little.

          • Kissaki@feddit.de
            1 year ago

            I read the table example again and I don’t see how it describes a solution.

      • jungle@lemmy.world
        1 year ago

        What about person A putting an arm over person B’s shoulder? That’d have to be a pretty long arm.

        • ColeSloth@discuss.tchncs.de
          1 year ago

          If they were close enough in a 4:3 shot to do that, the stretching to reach 16:9 would be very minimal. Aside from that, AI could avoid changing the spacing between physically interacting people and objects.

    • Honytawk@lemmy.zip
      1 year ago

      The only thing that would seem off is that the actors stand closer together than they need to. But other than that, I doubt many would notice.

    • Nerd02@lemmy.basedcount.com
      1 year ago

      Holy cow that is beyond impressive. Sure enough, sometimes it does hallucinate a bit, but it’s already quite wild. Can’t help but wonder where we’ll be in the next 5-10 years.

      • Tar_Alcaran@sh.itjust.works
        1 year ago

        Eh, doing this on cherrypicked stationary scenes and then cherrypicking the results isn’t that impressive. I’ll be REALLY impressed when AI can extrapolate someone walking into frame.

    • nul@programming.dev
      1 year ago

      The video seems a bit misleading in this context. It looks fine for what it is, but I don’t think they’ve accomplished what OP is describing. They’ve cherry-picked some still shots, used AI to add to the top and bottom of individual frames, and then given the shot a slight zoom to create the illusion of motion.

      I don’t think the person who made it was trying to be disingenuous; I’m just pointing out that we’re still a long way from convincingly filling in missing data like this for video, where the AI has to understand things like camera moves and object permanence. Still cool, though.

      • Crul@lemm.ee
        1 year ago

        Great points. I agree.

        A properly working implementation for the general case is still far off, and it would be much more complex than this experiment. Not only will it need the usual frame-to-frame temporal coherence, it will probably also need to take into account information from potentially any frame in the whole video in order to stay consistent across different camera angles of the same place.

      • Honytawk@lemmy.zip
        1 year ago

        It is the first iteration of this technology, things will only improve the more we use it.

        That it can do still images is already infinitely more impressive than not being able to do it at all.


  • drdiddlybadger@pawb.social
    1 year ago

    You should be able to, but remember that aspect ratios and framing are chosen intentionally, so what is generated won’t be at all true to what should be in the scene once the frame is widened. You’d be watching derivative media. Upscaling should be perfectly doable, but there too, details will eventually be generated that never existed in the original scenes.

    Probably would be fun eventually to try the conversion and see what differences you get.

    • Deestan@lemmy.world
      1 year ago

      4:3 - Jumpscare, gremlin jumps in from off-camera.

      16:9 AI upsized - Gremlin hangs out awkwardly to the left of the characters for half a minute, then jumps in.

        • CeruleanRuin@lemmings.world
          1 year ago

          Well for sure there’s some value in it, but let’s not pretend it wouldn’t completely change the intention and impact.

          • averagedrunk@lemmy.ml
            1 year ago

            I don’t know who would downvote that. You’re absolutely right. And I would still watch the hell out of that movie.

      • SSTF@lemmy.world
        1 year ago

        I was just thinking that. Or something like a comedy bit where the camera pans to a character who had just been out of frame.

        Overall it seems like impressive technology for reworking old media, but I’d rather put it to use tastefully sharpening image quality than reframing images.

        • Deestan@lemmy.world
          1 year ago

          Haha, yes. I spent 15 minutes trying to remember the term for the pan/zoom-to-reveal comedy effect before giving up and settling on a botched jumpscare.

    • Ghost33313@kbin.social
      1 year ago

      Exactly, and to add to it, you can’t know the director’s vision or opinion on how the framing should be adjusted. AI can make images easily but it won’t understand subtext and context that was intended. No time soon at least.

    • FelipeFelop@discuss.online
      1 year ago

      Very true. I remember a few years ago someone converting old cartoons to a consistent 60 frames per second.

      If they’d asked an animator, they’d have learned that animation purposely uses different rates of change to give scenes a different feel. So the “improvement” actually ruined the very thing they were trying to improve.

      • SSTF@lemmy.world
        1 year ago

        Yes, sometimes frame rates are intentional choices for artistic reasons and sometimes they are economic choices that animators work around.

        Old Looney Tunes used a lot of smear frames in order to speed up production. They were 24 frames per second broadcast on doubles, which meant 12 drawn frames per second with each frame being shown twice. The smear frames gave the impression of faster movement. Enhancing the frame rate on those would almost certainly make them look weird.

        If you want to see an artistic choice, the first Spider-Verse movie is an easy example. It’s on doubles (or approximates being on doubles in CG) for most scenes, to give them an almost stop-motion look, and then goes to singles for action scenes to make them feel smoother.
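        The “on doubles” cadence is easy to make concrete; a toy sketch (short strings stand in for real frames):

```python
# Animating "on twos": 12 drawn frames per second are broadcast at 24 fps
# by showing each drawing twice. A smear frame covers the large motion
# between drawings, so there is no clean in-between pose for a frame
# interpolator to reconstruct, which is why naive "60 fps" conversions of
# smear-heavy animation look wrong.

def on_twos(drawings):
    """Expand drawn frames to broadcast frames, each shown twice."""
    return [frame for frame in drawings for _ in range(2)]

second = on_twos(["pose1", "smear", "pose2"] * 4)  # 12 drawings per second
print(len(second))  # 24 broadcast frames
```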

    • SanguinePar@lemmy.world
      1 year ago

      Definitely. I remember hearing about The Wire being released in a 16:9 format, even though it was shot and broadcast in 4:3, and how that potentially messes up some of the shot framing.

      They did it by cropping from the top and bottom rather than AI infilling, but the issue is the same.

      IIRC, David Simon wrote a really interesting piece about how they did it, and how they did everything they could to stay true to Robert Colesberry’s carefully planned framing, aware that had it been intended for 16:9, he’d have framed things differently.

      Personally I wish they had kept it at 4:3 and only released it in a higher resolution. Glad I still have my old 4:3 DVDs.

  • SSTF@lemmy.world
    1 year ago

    Who is the person that enjoys old shows but can’t get past the old aspect ratio?

    If the AI is just adding complementary, unobtrusive parts to the shot, so as not to disrupt the original intent, I have to ask: is there really value being added? Why do this at all?

    George Lucas thought CGI could make the original Star Wars movies better.

    • Sethayy@sh.itjust.works
      1 year ago

      A similar thought I’ve had is AI removal of laugh tracks (maybe reconstructing the background audio based on non-laugh-track scenes).

      That would make old Scooby-Doos actually watchable for me, so I can hardly judge modifying the original a little if that’s what someone prefers; you can always just not watch it.

      Thinking about it a bit more, I 100% would be the type to use a 16:9 ratio, just because I hate black bars.

      • SSTF@lemmy.world
        1 year ago

        I’m not 100% against tweaking old media (so long as it’s an alternative to the original rather than a replacement); I just think the effort and outcome of widening shots is misguided. More so for live action than cartoons. Like, if an actor enters a scene from the side, are we going to trust the AI to perfectly add and merge them without it looking obvious and weird?

        For removing laugh tracks, the Scooby cartoons might be a good case for this; they seem properly paced without them. A lot of sitcoms pause for laughter, so removing it makes the pacing weird. I’m sure the Big Bang Theory video with the laughter removed proves how nightmarish scenes can become (you know, in addition to being Big Bang Theory).

    • Honytawk@lemmy.zip
      1 year ago

      I generally avoid old aspect ratio shows/movies, especially for animated stuff.

      They just look so extremely dated.

      • asdfasdfasdf@lemmy.world
        1 year ago

        That’s a mental thing. You’re missing out. Same with people who can’t enjoy the enormous amount of fantastic black and white movies out there.

  • CeruleanRuin@lemmings.world
    1 year ago

    Why would you want that? It’s always best to consume media in its purest form, and that means its original aspect ratio. Resolution I’m flexible on, because I figure filmmakers and TV directors of prior eras would have used HD had it been available. But aspect ratio is tied to the format, and it can be used to great effect to convey the space of a scene in different ways. Changing the ratio is akin to changing the color palette. Might as well offer Instagram-style filters for older content while you’re at it.

    • Globulart@lemmy.world
      1 year ago

      Exactly, the filmmaker knew exactly what the aspect ratio was and framed shots specifically for it, why would anyone ever want this…?

      • Squirrel@thelemmy.club
        1 year ago

        Ooo, maybe we can get a nice blurred copy of the picture to fill the edges of the screen, just like TikTok!

        I feel sick even jokingly suggesting that…

    • Honytawk@lemmy.zip
      1 year ago

      Because adding more detail to the sides doesn’t change the quality of the show? If anything, it improves it.

      I’m also not going to set up some old CRT monitor to “consume media in its purest form”, because I want the best quality, not the quality the filmmaker wanted back when it was created.

      • Globulart@lemmy.world
        1 year ago

        The best quality IS the quality the filmmaker wanted when it was created.

        People still buy CRT TVs, by the way; the absence of input lag makes them ideal for competitive Smash Bros.

        Most people are perfectly happy watching an old film/show on a new screen because it can near as dammit replicate exactly what the director wanted us to see. Adding actual stuff to the framing of shots is a nuts idea and it absolutely changes the quality.

        To give an example, having someone’s face fill the screen versus being centered with more background visible around it changes the feel of a shot a LOT. 12 Angry Men is a good example here: the camera gets almost imperceptibly closer to the person speaking throughout the film to ramp up tension and draw the viewer into the story. It creates a feeling of claustrophobia that would be dulled if you added content to the edges of the screen.

  • TheInsane42@lemmy.world
    1 year ago

    Late last century I bought a 16:9 TV that had software to stretch 4:3 broadcasts to fit the screen. It chopped off a tad at the top and bottom and stretched the sides a bit to fill the screen… it was horrible. I’d rather have the dark borders at the sides than mutilated images. Somehow I doubt AI would be much more creative.

    • OhmsLawn@lemmy.world
      1 year ago

      Yeah it’s always better to watch the original aspect ratio. I remember renting VHS movies and being frustrated either way because the screen was so small. That isn’t a problem these days.

    • Couldbealeotard@lemmy.world
      1 year ago

      The issues in Stargate were not because they were framing for 4:3, they were just clumsy mistakes.

      Stargate was always framed for widescreen.

      If you watch the HD versions of Atlantis you will also notice lots of focus issues. Not to mention, across all three series plenty of things ended up on screen by mistake: scripts lying around in a Goa’uld cargo hold, a Snickers wrapper in an alien space station, people holding up plants, and offensive remarks about the French.

      • SARGEx117@lemmy.world
        1 year ago

        Oh, for sure they made tons of mistakes; I just thought of that one as the most glaringly obvious “framing issue”. I’ve seen all the episodes as aired and as on DVD, with commentary and special features, because I’m a super nerd, and given your pfp and the fact that you know how they framed it, I’d say you know a thing or two yourself!

        If I remember correctly, they framed it for widescreen knowing that in the future it could be released in formats other than TV, so they wanted widescreen from the start. Behind-the-scenes footage shows the framing boxes for TV/wide on the monitors.

        I love catching mistakes and weird choices in my shows. For instance: in Firefly, Alan Tudyk is pretending to hold the controls of Serenity because they couldn’t have him up in the normal spot for framing reasons.

        But yeah, I’m actually rewatching SGA now and their weird focus issues (and let’s be honest, terrible backgrounds) are especially bad in season 1.

  • Echo Dot@feddit.uk
    1 year ago

    There is no original film master for Voyager; it was finished on tape.

    TNG used film, so that can be rescanned, but for Voyager the original analogue broadcast is literally the best quality we have.

  • SHITPOSTING_ACCOUNT@feddit.de
    1 year ago

    Adding imagery that reliably looks good is currently beyond what AI can do, but it’s likely going to become possible eventually. It’s fiction, so the AI making stuff up isn’t a problem.

    Upscaling is already something AI can do extremely well (again, if you’re ok with hallucinations).
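    The hallucination point is the key difference from classical upscaling, which only ever interpolates between pixels that are really there and so just gets softer as it scales. A minimal, stdlib-only bilinear upscaler on a grayscale image (nested lists) for contrast:

```python
# Classical (non-AI) upscaling: bilinear interpolation blends the four
# nearest source pixels, so no new detail can appear. AI upscalers instead
# hallucinate plausible high-frequency detail that was never captured.

def bilinear_upscale(img, factor):
    """Upscale a grayscale image (list of rows) by an integer factor."""
    h, w = len(img), len(img[0])
    H, W = h * factor, w * factor
    out = []
    for Y in range(H):
        y = Y * (h - 1) / (H - 1) if H > 1 else 0.0
        y0 = int(y)
        y1 = min(y0 + 1, h - 1)
        fy = y - y0
        row = []
        for X in range(W):
            x = X * (w - 1) / (W - 1) if W > 1 else 0.0
            x0 = int(x)
            x1 = min(x0 + 1, w - 1)
            fx = x - x0
            # Weighted blend of the four surrounding source pixels.
            row.append(img[y0][x0] * (1 - fx) * (1 - fy)
                       + img[y0][x1] * fx * (1 - fy)
                       + img[y1][x0] * (1 - fx) * fy
                       + img[y1][x1] * fx * fy)
        out.append(row)
    return out

big = bilinear_upscale([[0, 10], [10, 20]], 2)  # 2x2 -> 4x4
print(big[0][0], big[3][3])  # corners are preserved: 0.0 20.0
```

Every output value here is a weighted average of real inputs; an AI upscaler drops that guarantee in exchange for sharpness.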

    • MeatsOfRage@lemmy.world
      1 year ago

      I’m not sure it’s really beyond the scope of AI. Stuff like Stable Diffusion in-painting/out-painting, and some of what Adobe showed off at their recent keynote, suggests we’re already there.

      • Tar_Alcaran@sh.itjust.works
        1 year ago

        Those are on a completely different level from having someone walk into frame though, and they still only work on small things that can be extrapolated from the image.

  • PopOfAfrica@lemmy.world
    1 year ago

    Yeah, but why would you want to? It would have to generate new imagery to fill out the gaps. That’s bound to not look right. It at the very least would not be fitting the artist’s intention.

  • Potatos_are_not_friends@lemmy.world
    1 year ago

    I used Adobe’s Generative Fill tool (which uses AI to fill in the blanks, like adding more sky/backgrounds or hiding people) and it’s pretty much a miss. 20% of the time, it would kinda work. And I say that loosely.

    I think in a few years, we’ll get there. But not today.

  • Thirsty Hyena@lemmy.world
    1 year ago

    I was wondering the same last week, but for Buffy the Vampire Slayer tv series, which received a horrible HD release some years back.

    • tankplanker@lemmy.world
      link
      fedilink
      arrow-up
      6
      ·
      1 year ago

      For Buffy they recut the shots using the raw footage, often very cheaply, so film equipment was frequently visible. They also didn’t address how bad the makeup looked in HD, but then soft-focus face filters are also garbage.

      When The Simpsons tried it, they cropped the frame, which is just laughably bad, as it removed information and context from scenes.

      When The Wire was done, they got David Simon back to work on the conversion; he considers it a completely different cut of the show. I think this is the only way to do it, since it means reframing the shots, and that’s a decision for the director, editor, and DP, IMO.

      AI making shit up to add to the frame removes the context for the shot. Nothing wrong with black bars for me, I just want good colour balance and upscaling.

  • Stamets@startrek.website
    1 year ago

    I’ve been rewatching Star Trek: Voyager recently

    Good choice.

    I’d imagine that AI probably will be able to in 10-15 years. We already have that Photoshop feature that can analyze the surroundings and then fill in gaps/erase stuff. It’s not perfect, but it’s the ground floor. I can only imagine that in the not-too-distant future it’ll be able to fill in the gaps of video too, especially with a consistent set like Voyager’s main engineering, for example.

    Actually come to think of it, the same principle could be used to make VR environments too.
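    The “consistent set” point is worth dwelling on: for a standing set that appears in hundreds of episodes, much of the missing area isn’t really unknown. A toy sketch of that idea, filling unknown pixels from a clean background plate of the same set (real generative fill works from learned priors rather than a literal plate, so this is only the flavor of it):

```python
# Toy version of "the set is consistent, so reuse it": fill pixels we have
# no data for (None) from a clean background plate of the same standing set.

def fill_from_plate(frame, plate):
    """Replace unknown (None) pixels with the plate pixel at that position."""
    return [
        [pix if pix is not None else plate[y][x] for x, pix in enumerate(row)]
        for y, row in enumerate(frame)
    ]

plate = [[1, 2, 3], [4, 5, 6]]            # clean shot of the empty set
frame = [[None, 2, 9], [None, 5, 6]]      # left column lost to 4:3 cropping
print(fill_from_plate(frame, plate))      # [[1, 2, 9], [4, 5, 6]]
```

Anything that moves (actors, props) still has to be invented, which is where the hard problems in the rest of the thread come back in.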