Juanxer Posted August 30, 2010
This comes from EIAS3D's honorable opposition :) but I think it is quite revealing: Brad Peebler from Luxology (Modo) talks about testing GPU-based rendering, trying to reach those mythical 10x speedups one often hears about. Sadly, it's not there yet at all: you have to discard many rendering features the GPU can't handle. So CPUs it is. See: GPU vs CPU Rendering Talk by Luxology. It's sad that Intel abandoned the Larrabee project, which would have been the closest thing to solving this industry's needs. One hopes some of the upcoming CPU+GPU-on-one-die processors become interesting products: if the GPU shares memory with the CPU, you can use it as a sort of super vector unit for partial calculations, without the time penalties involved in moving data between main memory and the graphics card's. Anyway, the main thing is that neither CUDA nor OpenCL is going to be of much help in our field for quite a while.
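(For anyone curious why the bus transfer matters so much: here is a back-of-envelope model, not a benchmark. All the numbers are assumptions I picked for illustration, not measurements of any real card.)

```python
# Toy model of when a discrete GPU actually pays off.
# cpu_gflops, gpu_gflops and pcie_gbps are made-up illustrative figures.

def effective_speedup(work_flops, data_bytes,
                      cpu_gflops=50.0,    # assumed CPU throughput (GFLOP/s)
                      gpu_gflops=500.0,   # assumed GPU throughput (GFLOP/s)
                      pcie_gbps=8.0):     # assumed bus bandwidth (GB/s)
    """Ratio of CPU time to GPU time, counting the cost of copying
    the data to the card and the result back over the bus."""
    cpu_time = work_flops / (cpu_gflops * 1e9)
    transfer = 2 * data_bytes / (pcie_gbps * 1e9)   # copy in + copy out
    gpu_time = transfer + work_flops / (gpu_gflops * 1e9)
    return cpu_time / gpu_time

# Lots of compute per byte moved: the GPU's raw power shows.
heavy = effective_speedup(work_flops=1e12, data_bytes=1e8)
# Little compute per byte moved: the bus dominates and the GPU loses.
light = effective_speedup(work_flops=1e9, data_bytes=1e8)
print(heavy, light)
```

With shared memory the `transfer` term drops to zero, which is exactly why a CPU+GPU-in-one-package design is interesting for the "light compute per byte" case.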
Tomas Egger Posted August 30, 2010
Hello, it's always interesting to see research. Thanks, Tom
Phungus Posted September 9, 2010
Who do you put your faith in, Luxology or ChaosGroup? Seems like there are others going that route too. http://www.chaosgroup.com/en/2/news.html?single=294
Tomas Egger Posted September 9, 2010
Hello Phungus, there is a lot of speculation on this theme. Who is right, Luxology or Chaos? I really don't know. For sure, CPU is the first step in this area; then, maybe, a mix with GPU in the areas where it works best. Thanks, Tom
Juanxer Posted September 9, 2010 Author
If I understand things right, the problem remains that GPU-based acceleration limits which renderer features you can use.
Gigayoda Posted September 9, 2010
I'm no expert by any means, but as I recall Intel is still trying to catch up to the real CUDA that Nvidia was developing with Apple, along with OpenCL. So why did Lux go with Intel as the advisory firm when Intel is still lacking in GPU power? AMD or Nvidia should have been on that list. I do understand that motherboard manufacturers will improve the bus that carries the data among all the components, but that review personally seemed a bit one-sided, in the sense that Intel, who is anti-Nvidia, would again prevent innovation after their scuffle with Apple about GPU-powered machines. In due time, AMD and Nvidia will be the ones that demonstrate what the GPU is capable of. Intel has gotten into the habit of restricting GPUs that are not its own. CUDA is an Nvidia initiative, and for Lux to use Intel as the company to test or gauge what CUDA can do is a major conflict of interest. My 2 cents. I do hope a solution from Nvidia is released that will help unlock that hidden GPU power.