Posts posted by maher

  1. You are right. It has some nice little enhancements, but I'm kind of mad at Apple right now.

    200 new features? An enhancement to an existing product is not a new feature.

    One field for typing both search terms and web addresses was already in Chrome; full screen, Tab view, smooth scrolling and 6% faster performance are to be considered new features? NOT.

    How about giving me back my iDisk or the equivalent of Dropbox in iCloud?

    Their marketing strategy is making pebbles look like mountains; they should call the update Pebble Lion...

    I needed to vent... I should let it go now and look at the positive side of this upgrade.

    Again, you are right. It has some nice little enhancements.

    http://www.engadget....on-10-8-review/

    Well Richard, you are forgetting one amazing feature from Mountain Lion!

    It is the integrated dictation à la Siri.

    I cannot live without it.

    Maher

  2. OK, this is probably a very dumb question, but it has bothered me for a long time:

    I thought that most modern render engines take into account only visible polygons. I understand that when doing GI, invisible objects and polygons in a scene can contribute indirectly to the lighting of visible polygons.

    I also understand that in multi-million-polygon scenes, you selectively and manually remove objects that don't matter much when they are out of the camera's field of view, for faster render times.

    My question is, why doesn't someone code the rendering engine so that this happens automatically, in a first-pass calculation, removing polygons that are out of the field of view beyond some defined distance or angle? And why not automatically reduce the remaining out-of-view objects by decimating their polygon counts, when they still need to be taken into account for their secondary GI influence? Something like the sketch below.
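
    Roughly, I imagine the first pass working like this (a minimal sketch only; the object layout, angle thresholds and decimation ratio are made-up illustrations, not how any actual engine does it):

        import math
        from dataclasses import dataclass

        @dataclass
        class SceneObject:
            position: tuple   # world-space center (x, y, z)
            polygons: int     # full-resolution polygon count

        def angle_off_axis(cam_pos, cam_dir, obj_pos):
            # Angle (degrees) between the camera view axis (unit vector)
            # and the direction from the camera to the object.
            to_obj = [o - c for o, c in zip(obj_pos, cam_pos)]
            dist = math.sqrt(sum(d * d for d in to_obj))
            if dist == 0.0:
                return 0.0
            cos_a = sum(a * b for a, b in zip(cam_dir, to_obj)) / dist
            return math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))

        def first_pass(objects, cam_pos, cam_dir, fov_half=30.0, cull_angle=120.0):
            # Inside the field of view: keep full resolution.
            # Outside the FOV but not too far behind: keep a decimated
            # proxy, since it only matters for bounced GI light.
            # Far behind the camera: drop it entirely.
            kept = []
            for obj in objects:
                angle = angle_off_axis(cam_pos, cam_dir, obj.position)
                if angle <= fov_half:
                    kept.append((obj, obj.polygons))
                elif angle <= cull_angle:
                    kept.append((obj, max(8, obj.polygons // 16)))
            return kept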

    Maher

  3. Diego, why do you animate the stereo separation? In real life our eyes always stay at the same distance from each other; they just converge as needed on a new point of interest.

    I always use convergence by reference. You can attach the camera's reference point to a null object that you keep at approximately the same distance as the main point of interest of your view (the sketch below shows the simple math behind it).

    If I use convergence at infinity, I end up with movies like the ones produced by the GoPro Hero 3D kit. The software they provide allows you to keyframe the convergence zone through the movie. We could always use such a feature with CG stereoscopic renders, but we would need to frame a bit wider and render at a somewhat bigger resolution to compensate for the zooming needed to cover the border offset caused by the change of convergence point.
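
    For the record, the simple math I mean (a minimal sketch; the interocular distance and the subject distances are illustrative values in centimeters):

        import math

        def toe_in_angle(interocular_cm=6.5, convergence_cm=300.0):
            # Angle (degrees) each camera rotates inward so both view
            # axes cross at the point of interest. The separation never
            # changes; only this angle does, exactly like real eyes.
            return math.degrees(math.atan((interocular_cm / 2.0) / convergence_cm))

        print(toe_in_angle(convergence_cm=100.0))   # near subject: ~1.86 degrees
        print(toe_in_angle(convergence_cm=1000.0))  # far subject: ~0.19 degrees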

  4. Hi Diego,

    you nailed it! My camera was still in stereo mode...

    Yes, I can see side-by-side stereo by crossing my eyes, but I always wonder what everyone would look like if we had to do stereo iChat with this technique ;-)

    Thanks again Diego and Thomas!

    Maher

  5. Hello Diego,

    The active shutter glasses have an edge over polarized glasses in terms of resolution only on current 3D TV sets.

    In theaters, the RealD system uses circularly polarized glasses and a single projector (not 2 as in IMAX) with an electronic apparatus that alternates, for each full-resolution frame, the circular polarization of the light going out of the projector.

    Samsung is working with RealD to apply such a system to 3D TV sets (active 3D TV at full resolution with passive polarized glasses).

    Now translate such a system to a computer monitor that produces 3D CG scenes on the fly, with head tracking to recalculate the virtual camera position, and you get a truly immersive system.

    One of my graduating classmates worked as team manager on such a system at CAE, the Montreal flight simulation company. They projected the CG data directly onto the retina, tracking the fovea and computing a high-resolution image of 1024 × 1024 just for that high-detail region. For the peripheral vision, they generated a «low-resolution» image, also 1024 × 1024. The first prototype produced the equivalent of half the resolution human vision can perceive, and they reached parity with the 2nd generation. Alas, that incredible project was too expensive even for the military and it never went into production...
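
    The core idea is easy to sketch: render a small sharp image where the fovea is looking and a stretched coarse one everywhere else, then composite. This is only an illustration of the principle using the Pillow imaging library, not CAE's actual pipeline; every name and size here is made up:

        from PIL import Image

        def composite_foveated(peripheral, foveal, gaze_xy, out_size=(2048, 2048)):
            # Stretch the coarse peripheral render to the full field,
            # then paste the sharp foveal inset centered on the tracked
            # gaze point. Sizes and names are illustrative only.
            frame = peripheral.resize(out_size, Image.BILINEAR)
            fx, fy = gaze_xy
            half = foveal.width // 2
            frame.paste(foveal, (fx - half, fy - half))
            return frame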

  6. Hello Diego,

    why do you find the active shutter glasses better than the polarized ones?

    All those 3D TV sets with active glasses are staying on the shelves of the electronics stores because people don't want to bother with heavy, expensive glasses and recharging units that decay rapidly. The 3D TV sets with passive glasses will probably supplant the first 3D TV generation. For now they have to cut the vertical resolution in half, but if Samsung can really produce a hybrid solution (active 3D TV at full resolution with passive polarized glasses) at a decent price, I think it will be a winner.

  7. Hello all,

    I'm not a big fan of 3D TV sets and stereoscopic movies, but I've always wanted to do stereoscopic CG animations just for the fun of it. A stereoscopic CG scene is just one step closer to a perfect simulation of a real universe, or a totally unreal one if you are a sci-fi fan like me.

    Since prices for 3D TV sets are going down, due to lack of public interest, or lack of pertinent content for that matter, I'm considering the cheapest way to produce 3D Blu-ray tests from my EIAS files. The cost of professional 3D mastering software was out of reach for a hobbyist until recently. But things may change soon:

    GoPro, the company that produces those cheap HD action cameras, makes a kit where you put 2 of those tiny cams in a waterproof case and sync them together to produce parallax movies that you mux together with their free software, GoPro CineForm Studio (in fact, GoPro bought CineForm to make a minimal version of their pro app).

    GoPro CineForm Studio is free, but the catch is that it only muxes files from the GoPro cams. After trying, with mixed results, to hack my EIAS renders to make them work with the app, I decided to download the trial version of the professional version of CineForm (NeoMac).

    The trial app works without limitations for 10 days and is a lot more complex than GoPro CineForm Studio. The good news is that after contacting the support team at CineForm, I learned that they're planning to release a version called GoPro CineForm Studio Premium that will work with any source files. I don't know the price tag of that piece of software yet, but it should logically be under the $300 mark of Neo, and way under the $1K-and-up price of other solutions.

    The technology they developed at CineForm is very clever. They multiplex the left/right channels into a single QuickTime file as metadata, so the two streams are hidden from the QT player or the NLE app. You don't need to import left/right files separately and be careful to keep them in sync while applying effects on both tracks. You just work as if the imported files were 2D. A plug-in adds a permanent menu that lets you choose how the QT file must be displayed (anaglyph, side by side, interlaced fields, etc.). For quick tests without it, see the sketch below.
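
    In the meantime, a crude way to test is to pack each pair of rendered frames into the half-width side-by-side layout that 3D TVs recognize. A minimal sketch using the Pillow imaging library; the file names and the frame range are placeholders, not anything from CineForm:

        from PIL import Image

        def pack_side_by_side(left_path, right_path, out_path):
            # Squeeze each eye to half width and paste both into one
            # full-width frame (half-width side-by-side layout).
            left = Image.open(left_path)
            right = Image.open(right_path)
            w, h = left.size
            half = (w // 2, h)
            frame = Image.new("RGB", (w, h))
            frame.paste(left.resize(half, Image.LANCZOS), (0, 0))
            frame.paste(right.resize(half, Image.LANCZOS), (w // 2, 0))
            frame.save(out_path)

        # e.g. over a numbered EIAS render sequence (placeholder names):
        for i in range(1, 101):
            pack_side_by_side(f"left.{i:04d}.png", f"right.{i:04d}.png", f"sbs.{i:04d}.png")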

    Now I have to explore how to burn, on a Mac, a 3D Blu-ray compatible disc that will play correctly on a 3D TV set.

    Has anyone already found such a solution?

  8. This is my daughter's first attempt at 3D modeling...

    Perfect loops, all normals pointing outwards, excellent skin texture with subsurface scattering around the ears and nostrils. Outstanding job!

    I'm a VERY happy grandfather and proud to present to you Michaël, son of Dominique and Philippe.

    Hey congratulations, Richard, Dominique and Philippe!

    You should teach Dominique to use EIAS 3D next time. 9 months of rendering is way too long ;-)

    Maher
