Advice: Ability to create video? #79
Comments
@gateway A specific seed value would let you recreate the same stylized video, but specific seeds aren't required for stylizing video. I haven't seen anyone using neural-style-pt with proper optical flow yet, but from what I understand it shouldn't be too difficult to do. Another good starting point for getting video working might be to follow https://github.com/manuelruder/artistic-videos, though it may be a bit out of date these days. Like neural-style-pt, it's based on the original neural-style. For optical flow, I'm not aware of the latest implementations you could use. You could try asking on the PyTorch forums, or, depending on your skills, PyTorch may make it easy enough to implement yourself.
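To illustrate the seed point above, here is a minimal sketch of fixing PyTorch's RNG state so that repeated runs produce identical results (the `set_seed` helper name is my own; neural-style-pt itself exposes this via its `-seed` flag):

```python
import random

import torch


def set_seed(seed: int) -> None:
    """Fix RNG state so repeated stylization runs produce the same output."""
    random.seed(seed)
    torch.manual_seed(seed)  # seeds the CPU RNG (and all CUDA devices in recent PyTorch)
    torch.backends.cudnn.deterministic = True  # avoid nondeterministic cuDNN kernels
    torch.backends.cudnn.benchmark = False


# Two draws under the same seed are identical.
set_seed(1234)
a = torch.randn(3)
set_seed(1234)
b = torch.randn(3)
print(bool(torch.equal(a, b)))  # → True
```

For video, a fixed seed keeps the random initialization consistent across frames, but it does not by itself give temporal coherence; that is what optical flow is for.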
@ProGamerGov Thanks for your reply. I'm more of a hacker than a Python programmer: I program in other languages but never got into Python, so my skills are mostly setup work and hacking in a few lines here and there. When it comes to PyTorch or other neural stuff, I'm still in the learning and amazement period. Nvidia has their Optical Flow SDK for GPUs that support Turing, which I have: https://developer.nvidia.com/opticalflow-sdk. I also believe it can be compiled into OpenCV as an add-on. From everything I have read about video (I even tried out the one you mentioned), they all still rely on CPUs. Some have found a way to create a cluster of CPUs, which is helpful but nowhere near what a GPU can do, of course. This was done with artistic-videos or, umm, cysmiths, can't remember now: https://youtu.be/72FIC0zJPMs (ignore the audio sync, I forgot to render the frames in ffmpeg at 24fps). I just can't wait years for these frames to render, especially if you want to do at least 1080p. Anyhow, just trying to pick your brain on this.
@gateway PyTorch is light years ahead of other frameworks like TensorFlow in terms of how easy it is to do things. Python is also a pretty easy language to learn, at least compared to Java and Lua. I haven't done a lot of experimentation with video just because of how long it takes to test the code, but I'm excited to see what you come up with!
I've actually extended this repo to add flow-weighted video style transfer. |
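The core of flow-weighted video style transfer is warping the previous stylized frame along the optical flow before blending it with the next frame's stylization. This is a minimal NumPy sketch of that warping step (my own simplified illustration with nearest-neighbour sampling, not JCBrouwer's actual implementation):

```python
import numpy as np


def warp(frame: np.ndarray, flow: np.ndarray) -> np.ndarray:
    """Warp `frame` along `flow` using nearest-neighbour sampling.

    flow[..., 0] is the x displacement, flow[..., 1] the y displacement.
    """
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(np.rint(xs + flow[..., 0]), 0, w - 1).astype(int)
    src_y = np.clip(np.rint(ys + flow[..., 1]), 0, h - 1).astype(int)
    return frame[src_y, src_x]


# Shift content left by one pixel via a constant flow field.
frame = np.arange(16).reshape(4, 4)
flow = np.zeros((4, 4, 2))
flow[..., 0] = 1.0  # each output pixel samples from one pixel to its right
warped = warp(frame, flow)
print(warped[0])  # → [1 2 3 3]
```

In a full pipeline (as in artistic-videos), the warped previous result initializes the optimization for the next frame, and a flow-consistency mask down-weights occluded pixels where the warp is unreliable.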
@JCBrouwer I'm trying to give it a shot. I have my conda env set up with all the required modules, but when I try to run it I get:
It's installed and shows up in `conda list`; I also tried installing it via pip, where it also shows up. I'll create a ticket in your repo. Also, just something to think about: it would be better for most people to use their own ffmpeg, since mine is compiled with CUDA and NVENC encoding/decoding on the GPU. Fixed by doing this; might be worth adding to the install.sh.
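On the NVENC point above: a system ffmpeg built with NVENC can encode rendered frames on the GPU. This is a hedged sketch of assembling such a command from Python (the frame pattern and output name are placeholders; `h264_nvenc` is only available in ffmpeg builds compiled with NVENC support):

```python
import shutil
import subprocess

# Hypothetical frame pattern and output path; adjust to your pipeline.
cmd = [
    "ffmpeg", "-y",
    "-framerate", "24",        # encode at 24fps (the audio-sync issue mentioned earlier)
    "-i", "frames/%05d.png",
    "-c:v", "h264_nvenc",      # GPU encoder; requires an NVENC-enabled ffmpeg build
    "-pix_fmt", "yuv420p",     # widely compatible pixel format
    "out.mp4",
]
print(" ".join(cmd))

# Only attempt the encode when ffmpeg is actually on PATH.
if shutil.which("ffmpeg"):
    subprocess.run(cmd, check=False)
```

Without NVENC, swapping `h264_nvenc` for `libx264` falls back to CPU encoding.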
Hi, thanks for providing this amazing style transfer tool. In the past I was using cysmith's neural-style tf transfer, which stopped working with the new Titan RTX card I put in. I see similarities in what you guys have done and have been testing your version with much joy (I'm planning a blog post about some settings I found useful).
Anyhow, I was wondering if we could use this for video somehow, because some of the video style transfer stuff out there is 4-5 years old and uses optical flow and DeepMatching, which are mainly CPU-intensive, and the GPU ones don't seem to work with my system.
Any thoughts on anyone using your code to create videos? Is the seed value what I would need to render out consistent frames, along with maybe Nvidia's new optical flow system, or something else?
Anyhow thank you again!