NOTE: Not trying to be clickbaity with the title (I’ll change it once this feature is more widely known), but I really want to get people’s attention because this update is huge!
• Blender Daily Builds: https://builder.blender.org/download/
• Auto Tile Size + Tile Sizes Explained: http://adaptivesamples.com/2013/09/11/auto-tile-size-addon/
• Mike Pan (Car Scene): http://www.MikePan.com
• Facebook: https://www.facebook.com/RonnieNeeley3D/
Music: Atnos & Echomaker – Submerged (Unreleased)
Guys, whoa. I’m still in shock. I only found out about this from a random post on the Blender subreddit (which didn’t even get much traction). For those who don’t want to wait for an official release, here is how to render with the GPU and CPU together now!
27 responses to “(New Secret Feature) Render With GPU and CPU in Blender”
It's very outdated now, please do an update
I just knocked a 6.5 hour render down to 4 hours, using an older gaming laptop with a 2.5 GHz i7 and a GTX 860M. Thanks so much!!
When will RAM rendering be available?
It’s 2018 and there is no such combined setting.
Don't know why, but in my case CPU+GPU with tile size 64 (much slower at 32 or 128) is about 15 seconds slower than GPU alone with 256 tiles.
PS: i5-7300HQ with a 1060 Max-Q
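One reason tile size matters so much for mixed CPU+GPU renders is simply how many tiles the frame gets split into: with few large tiles, the slow CPU can get stuck on one big tile while the GPU sits idle. A quick back-of-the-envelope calculation (plain Python, assuming a 1920×1080 frame; not Blender's actual code):

```python
import math

def tile_count(width, height, tile):
    # How many render tiles a frame is split into for a given tile size.
    # Partial tiles at the right/bottom edges still count as full tiles.
    return math.ceil(width / tile) * math.ceil(height / tile)

for size in (32, 64, 128, 256):
    print(size, tile_count(1920, 1080, size))
# 32  → 2040 tiles
# 64  →  510 tiles
# 128 →  135 tiles
# 256 →   40 tiles
```

At 256 there are only 40 tiles to share between devices, so the last CPU tile can dominate the total time; at 64 there are 510, which load-balances better but adds more per-tile overhead. That tradeoff matches what several commenters here report.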
I keep getting the error "Split kernel error: failed to load kernel_path_init" when I try to render. I'm using a full AMD build: Ryzen 3 + RX 570
Without CPU-only rendering times in the benchmarks, this still feels a lot like clickbait.
For my rig I think the best results are with GPU only.
I've got an i5 6400 and a GTX 1060.
With GPU only (GTX 1060) it renders 2x faster.
I think ideally, you'd have a powerful GPU that would start rendering with the most complex tiles in a scene, the CPU cores would start with the least complex tiles, and they'd all meet somewhere in the middle. I'm not sure how it would be implemented, but it seems like it would be the most efficient scenario.
New feature? This has been around for AGES
For me it's the reverse: GPU rendering doesn't work, and I have an Nvidia GT 730 with 4 GB of VRAM. It doesn't even get recognized on my computer with a GTX 1080.
It's OK, but you don't really explain how to make this option appear! I only have one option, and that's the graphics card.
In the render tab, under Device, it says GPU Compute. Does it still render with both CPU and GPU?
Hello everyone! I've got a little problem: I installed the build linked in the description, but when I go into User Preferences, I only have a big "None" button under the Compute section… But in the official 2.79 version, I can select my Nvidia card without any problem! Does anyone have a solution for me, please? Thanks 🙂
Sadly, I'm getting a "Split kernel error: failed to load kernel_path_init" when I try to render with CPU and GPU. Radeon R390, updated drivers.
If anyone ran into the same problem and knows a fix, hey, let me know :3
Yeah. They keep adding "experimental" features without even refining and finishing previous ones, like microdisplacements. Blender is starting to look like Autodesk. Sigh. It was too good to be true until now.
I tested it, and it does not seem to be useful for small images, as the GPU finishes much faster and I have to wait for the CPU to finish.
Hi Ronnie,
I would like to ask a question about this.
I've checked the preferences like you just did, but I can only see my GPU, not my CPU.
Is there another preference I missed to make the CPU show up?
Thank you
I saw a Corsair logo on the graphics card.
Damn
I set the tile size to 1024 and it worked fine. I've got a GTX 1070.
Uuuhm, hi, I need help. I've got the latest daily build and I don't see CUDA or anything like that in the system settings, only "None". Can someone help me with that? I've got the Feb 6th, 2018 build.
I get the additional options on one computer, but not on another. Same version of Blender, same OS, same card manufacturer, fully updated drivers, but different models of cards and CPUs.
I have an RX 480 4 GB GPU which takes 5x longer to render the same scene compared to a GTX 780 Ti, which is odd because the RX 480 should be roughly on par with a 780 Ti. I did select GPU render for both cards in the User Preferences. Any idea what may be wrong?
I used it, and it was slower than just using my CPU, and I have a Vega 64 with a Ryzen 1600.
Is this dual compute only for Cycles, or for Internal too?
Ryzen doesn't work!!!
My result is sad: CPU + GPU takes 4.4 minutes, and GPU only takes 2.2 minutes. Screenshots here: 1. https://i.imgur.com/v0gs4y0.png 2. https://i.imgur.com/tOxlWqg.png Afterword: when I set the tile size to 64, it's 2.2 minutes too 🙂 But I noticed that the CPU load is at 100% while the video card is only at 5-15%.