TokenFlow: Consistent diffusion features for consistent video editing (diffusion-tokenflow.github.io)
173 points by puttycat on July 22, 2023 | hide | past | favorite | 23 comments


Long long ago I had some fun stitching frames from some really old footage into a panorama (by hand). The most hilarious trick was to upscale the footage a lot so that frames could be visually aligned, made transparent, and stacked to improve the resolution. The grainy snow turned into a sharp picture with new, previously invisible details.
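The trick described above, stacking many aligned noisy frames, is essentially temporal averaging: the grain averages out while the shared signal stays. A minimal sketch with synthetic 1-D "frames" standing in for aligned image rows (all values and names here are invented for illustration):

```python
import random

random.seed(0)

# A hypothetical clean signal and several grainy captures of it.
clean = [random.random() for _ in range(1024)]
frames = [[c + random.gauss(0.0, 0.2) for c in clean] for _ in range(16)]

# Averaging aligned frames suppresses the grain: noise variance drops
# roughly as 1/N, revealing detail that no single frame shows.
stacked = [sum(f[i] for f in frames) / len(frames) for i in range(len(clean))]

err_single = sum(abs(a - b) for a, b in zip(frames[0], clean)) / len(clean)
err_stacked = sum(abs(a - b) for a, b in zip(stacked, clean)) / len(clean)
print(err_stacked < err_single)  # True: the stack is much closer to the signal
```

With 16 frames the residual noise is roughly a quarter of a single frame's, which is why aligning and blending transparent frames can pull "invisible" detail out of snow.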


"Code coming soon" — I really, really hope this code will be ready to set up and run. My experience with ML/AI research code on GitHub is pretty bad. Most of the time you have to guess the right dependencies (and versions).


I would prefer incomplete or barely working code over no code.

GitHub is the perfect place to post incomplete code, because someone else can contribute fixes.

If you're a scientist, please don't hold back publishing code just because someone else demands a certain code quality.


95% of problems would be solved by locking the entire dependency tree.
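Concretely, locking means recording an exact version for every transitive dependency, which is what tools like `pip freeze` or `conda env export` emit. A toy sketch of rendering such a lock; the package names and versions below are invented:

```python
def pin(packages):
    """Render a {name: version} snapshot as exact '==' pins, sorted for stable diffs."""
    return "\n".join(f"{name}=={version}" for name, version in sorted(packages.items()))

# Hypothetical snapshot of an ML project's environment; in practice a tool
# walks the entire dependency tree so nothing is left to guesswork.
env = {"torch": "2.0.1", "numpy": "1.24.3", "diffusers": "0.18.2"}
print(pin(env))
# diffusers==0.18.2
# numpy==1.24.3
# torch==2.0.1
```

Committing the resulting lockfile alongside the research code is what makes "works on my machine" reproducible years later.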


Yeah, it's especially annoying to have to guess which CUDA version makes it work. Sometimes projects don't even specify their conda environments properly.
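One way projects could ease the guessing is to fail fast with an explicit check instead of crashing deep inside a kernel. A hypothetical startup check (all version numbers invented; in a real project something like `torch.version.cuda` or `nvcc --version` would supply `available`):

```python
from typing import Optional

def check_cuda(required: str, available: Optional[str]) -> str:
    """Compare a project's pinned CUDA version against what the machine has."""
    if available is None:
        return f"CUDA {required} required, but none was found"
    # Compare major.minor only; a patch-level mismatch is usually harmless.
    if available.split(".")[:2] != required.split(".")[:2]:
        return f"CUDA {required} required, found {available}"
    return "ok"

print(check_cuda("11.8", "11.8.0"))  # ok
print(check_cuda("11.8", "12.1"))    # CUDA 11.8 required, found 12.1
print(check_cuda("11.8", None))      # CUDA 11.8 required, but none was found
```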


The more I see things like this, the more I wonder why there is no good video upscaler yet. Something that can really upscale old DVD or even VHS-quality video.

Topaz Video Enhance AI has barely advanced in the past few years; old versions sometimes even produce better results than new ones. And even then, some of the enhancements can be achieved with carefully chosen non-AI filters.


We're getting there. Temporally stable video generation is not trivial and is only now slowly being solved. Upscaling single images in a generative fashion (as in, actually adding detail) works really well: you can create images with an incredible amount of detail. Doing the same for video is around the corner.


This Instagram account makes some really neat stuff leaning into temporal inconsistencies:

https://www.instagram.com/never_ever_never_land/


This webpage caused my laptop (4 GB) to OOM. Really cool though!


My browser tries to load all the videos at once...can't watch any of them yet.


The videos are working now! Wow! Getting closer to perfection!

In some styles this is good enough!


Ohhhh, why not use a CDN to serve the videos?


They messed up AWS access to the videos, as of the time of writing.

They should upload to IPFS, pin with Filecoin, and let a CDN like CloudFront serve the videos.


Or... Just upload to YouTube and embed it.

And YouTube will deal with transcoding to different resolutions for different devices, auto select what quality to send depending on bandwidth, ensure it works on all popular platforms, and all the other bits of video hosting that these researchers didn't want to do.


I was assuming they had a reason for not doing the simple thing, so I offered something similar.


[flagged]


"Eschew flamebait. Avoid generic tangents."

"Please don't fulminate."

https://news.ycombinator.com/newsguidelines.html


That doesn't have anything to do with the comment you replied to. It isn't flamebait, a tangent, or fulmination.


In the sense of those terms which we use while moderating this site, it was all of those things. I could also have added:

"Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something."

and

"When disagreeing, please reply to the argument instead of calling names. 'That is idiotic; 1 + 1 is 2, not 3' can be shortened to '1 + 1 is 2, not 3.'" https://news.ycombinator.com/newsguidelines.html


They absolutely think about the consequences of their actions. What makes you think for a second that they don't?

> Perhaps we should slow down this frenetic pace of AI development and think about it for a bit?

Perhaps you should actually talk to them sometime and get a clue.


I have already tried talking to them, and they are generally dismissive or unwilling to talk. I understand quite a lot about AI. Not only was I a professional programmer for years, I also have a PhD in mathematics. So, I have a clue.

I understand academia quite well. Researchers are much more concerned about publications and rarely, if ever, consider the long-term societal implications of their research.

Don't believe me? Email a few and ask them if they've ever considered the long-term consequences of introducing AI into the world.

Perhaps my original post was a bit inflammatory, but it does make me angry how so many people simply release whatever new technology they can invent, even though it might be quite harmful.


You haven't been inflammatory at all, and don't let anyone tell you otherwise. Reckless irresponsibility is preached as a moral good in tech, and unfortunately it has gained many zealous, pious followers.

This isn't a time for sugarcoating. People need to hear the honest facts. If those facts offend them or run counter to their personal opinions, then they *definitely* need to hear them, and they need to hear it from people who are firm and unapologetic in delivering it.


Thank you for the support. Of course, I'm not trying to attack anyone. I was a researcher with peer-reviewed publications and one of my main motivations was pure intellectual curiosity. Unfortunately, I didn't give much thought to the responsibility side of the equation until recently.

And it's a fact that research in science now has pretty much zero requirement for ethical accounting. You can publish anything these days, and our knowledge and power only grows while responsibility does not grow with it.

The specific technology cited in the link is dangerous, and I have attempted to email quite a few researchers to engage in fairly polite discussion on it. I'm not surprised many people downvoted me, but I believe what I am saying is quite germane to the specific discussion.

I believe firmly, without any doubt whatsoever, that every single discussion of ANY AI technology must be accompanied by a reminder of extreme caution and criticism, because it's just too powerful to ignore.

I also believe strong words are necessary, even if some people are offended. If someone gets a job at an industrial plant and ignores safety regulations, that person would certainly receive some very strong words from their supervisor, because failing to follow those regulations might cause an industrial accident.

That's what's happening here. We are essentially ignoring what should be safety regulations, and we really should speak up about them or else humanity will be in serious danger.


I tend to somewhat agree. This [0] repository was posted here last week and really made me think about much of what you are discussing here.

[0] https://github.com/s0md3v/roop



