
Posts posted by Igors

  1. Hi, Dave, All

     

    1) Ability to get model/texture size - just added to the TODO list

     

    2) Render Scheduler. Experimental idea (for discussion): "subscripting". Imagine a user wants to modify the Render Scheduler for his needs and he knows what to do. But he opens the script's code and ... is confused. He sees a lot of code - where should he place his portion? In most cases (we guess) he wants to modify the workflow a little by adding some "callbacks" - for example, adding actions before any render pass is called. It's an easy script, but how to "localize" it?
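One possible shape for this "subscripting" idea, sketched in Python. All names here (`Scheduler`, `register_hook`, the event names) are invented for illustration - nothing like this exists in the EIAS Python API yet. The scheduler exposes named hook points, and a user's small sub-script registers callbacks instead of editing the scheduler's own code:

```python
# Hypothetical sketch of "subscripting": the Render Scheduler exposes
# named hook points; a user's sub-script attaches callbacks to them
# without touching the scheduler's internals. All names are invented.

class Scheduler:
    def __init__(self):
        self._hooks = {"before_pass": [], "after_pass": []}

    def register_hook(self, event, callback):
        """A sub-script calls this to attach its own action."""
        self._hooks[event].append(callback)

    def run_pass(self, pass_name):
        for cb in self._hooks["before_pass"]:
            cb(pass_name)
        result = f"rendered {pass_name}"   # stand-in for the real render call
        for cb in self._hooks["after_pass"]:
            cb(pass_name)
        return result

# The user's entire "sub-script" is then just a couple of lines:
scheduler = Scheduler()
scheduler.register_hook("before_pass", lambda p: print(f"about to render {p}"))
print(scheduler.run_pass("shadow"))
```

The big script stays untouched; each user's additions live in his own tiny file.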

     

    3) "Driven Key" - we learned it 5 (or more) years ago - so totally forgotten ;-) But we  remember well - EI anim channels are quite well to do this How it is in view of new Py Scripting? It's a wanted feature, if need to change host - we'll do

     

    Thx

  2. Hi, Dave, All

     

    1) A bit of lyrics/history: for years we ignored the "script theme/development". Only now do we begin to understand: it was a big mistake of ours. In many, many cases scripts are far more flexible, effective and easier to write compared to model plugins and other approaches. The EIAS Python API is much easier to expand than the plugin/shader API. Another positive side is the relatively easy creation of advanced UI; for example, no EI plugin has a "table" (like in the Render Scheduler script) because it would take many days of work with the C API - but only an hour with Python + the Qt API.

     

    So active "script fans/writers" are welcome and we would be happy to expand the Python API for their scripts. Of course, everything has its "pros and cons" - let's discuss it below.

     

    2)

     

    a) >> Ability to "get" texture/model size

    Sure, we'll just add these requests to the list.

     

    b) >> Can scripts be added to a toolbar/window (with custom icons)?

    Problematic, because "how the script palette should be organized" is quite debatable. In perspective, yes, but for now we want to get more usable scripts first.

     

    c) >> Can Python draw OpenGL GUI primitives - lines, boxes, "arrows", referenced FACT models?

    Nope; everything drawn in windows should be "project object(s)", i.e. nothing is anonymous/temporary. It's possible to create some new primitives that can be drawn but not rendered (like effectors), but we have no ideas/scenarios for how scripting could handle them effectively. Anyway, note that creating/removing something in the project is not a problem for a script.

     

    d) >> Product array generator - duplicates a sample model (i.e. products on a shelf).

    How about simplifying/concretizing the task? For example, use an effector as a bounding box? If so, we can help with the UI/Qt.

     

    e) >>  New Heliodon Sun positioning tool (research math option vs. geometry version used in XP version.)
    >>  Master light controller- allows easy control of master lights with sliders. Color, Intensity, etc.

    What do you need from the Python API for this?

     

    f) >> Measuring tool
    >> Dimensioning tool (would require display of "primitives", including text - would need to be scaled to the viewport)

    Same problem as above - imagine a script can add some primitives and then delete them. It's possible, but we still see no realistic "scenario" to go on.

     

    g) >> Photon project auto-setup script (specify overhead light array, reflectors, etc.) and the script generates a room environment

    Hmm... sounds a bit fantastic ;) How?

     

    h) >> Texture Scale Calculator/ Assigner (get texture size in pixels to calculate proper scale to specified world units- useful for architecture: Make selected floor tile texture 12 inches x 12 inches)

    After you get texture access - tell us what more you need.

    i). "To Do" Checklist in project file for tracking fixes and changes to a project while working.

    Ah! David's old plugin. Sure - and it's easy with Qt.

     

    j) >> Light Groups:

    We've already understood - this request is fundamental/typical. First, please confirm it's a variant of "Render Scheduler". If so, we'll think about how to expand this basic script painlessly (maybe via a sub-script), because it's already big enough, so adding new options would not be easy. If you mean something else - let us know.

     

    Thx

    Igors

  3. Hola Diego

     

    It's a pipeline limitation; speculars of GI lights are unachievable for "illuminator" shaders like MForge. Setting "Use GI Sampling engine" tells the renderer to calculate this light in the GI pass and store the illumination in the GI buffer for further interpolation. If a material has highlight shaders (such as Anisotropic), their specular will be calculated for each ray. However, there is no way to do the same for MForge because it takes whole control over the material and can be called only once per (sub)pixel.

     

    Well, "nothing is perfect" (banal but true :shy:)

  4. Hi All

    Here are some tech explanations.

    About render speed:

    EI8 simply used "brute force" with RT reflections/refractions. For example, a pixel with a reflective material is shaded. With the default 4x4 AA there are 16 initial reflection rays. For every ray that hits something - GI is calculated there. With the default "GI secondary = 50" we get 16 * 50 = 800 rays. With mutual and/or blurred reflections this amount becomes huge because the rays' propagation is kind of a "chain reaction". So brute force provides quality, but with unacceptably slow render times. That's why EI9 uses principally new techniques for this.
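The ray-count arithmetic above can be sketched with a tiny helper (illustrative only, not actual Camera code); the extra bounce shows how mutual reflections turn it into a "chain reaction":

```python
# Ray-count arithmetic for the brute-force scheme described above.

def brute_force_rays(aa_samples, gi_secondary, bounces=1):
    """Total secondary GI rays spawned for one reflective pixel."""
    total = 0
    rays = aa_samples              # initial reflection rays (4x4 AA = 16)
    for _ in range(bounces):
        rays *= gi_secondary       # each hit spawns its own secondary rays
        total += rays
    return total

print(brute_force_rays(16, 50))             # 16 * 50 = 800 rays
print(brute_force_rays(16, 50, bounces=2))  # 800 + 800 * 50 = 40800 rays
```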

    About hardware preview

    the render is different, but in a good way. The one thing I'm worried about though is the speed of the re-draw in hardware mode. Has the better quality representation come at the expense of speed? Or am I missing a setting or something? V8 was lighting faster in hardware mode, but v9 is tolerable
    Yes, the hardware "phong" preview is faster in EI8 because it was not really "phong" there :shy: (just "gouraud" with limited abilities). EI9 provides a true hardware phong preview using OpenGL shaders. In other words, EI8 shaded "every vertex", but EI9 shades "every pixel" in this mode.

    Generally, the task of "better speed and quality" has no limits; it simply should get better and better in each new version of the app.

  5. Hello Brian

    32bit camera = 33 sec,

    it uses 1 core for the "creating instances"

    then 1 core for "building bsp"

    then all cores for "calculating GI buffer"

    64 bit camera = 2 min 21sec

    it seems to use all the cores for all steps.

    but when you first launch camera, it just sits (doing something, not sure) for about 2 min.

    I then exported and imported the PlacerDeposit model, so Camera is rendering the same model but not with PlacerDeposit

    both 32 and 64 camera did it in 22 sec.

    Plugins are not multithreaded because only the plugin itself knows what to do. The host also can't do other things before the geometry is fully created. But there are no obstacles for developers to write multi-threaded plugins.

    A 32/64 render time difference is normal/expected in this case. When the 64-bit render core calls a 32-bit plugin, some time is wasted switching between 64/32 and back. With intensive exchange (1.5 million instances) the render time difference is noticeable. With smaller geometry it's less. You need a native 64-bit version of the plugin. Using 32-bit plugins is a temporary solution until Animator is moved to 64-bit.
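A back-of-envelope model of that bridge cost: each call into a 32-bit plugin pays a fixed mode-switch penalty, so the overhead grows with the number of exchanges. The per-call cost below is a made-up figure, purely for illustration:

```python
# Illustrative model of the 64 -> 32 -> 64 switching overhead described
# above. The 50-microsecond per-call cost is an invented number.

def bridge_overhead_seconds(n_calls, per_call_us=50.0):
    """Extra seconds spent on mode switches, at a fixed cost per call."""
    return n_calls * per_call_us / 1_000_000

print(bridge_overhead_seconds(1_500_000))   # 1.5 M exchanges at 50 us -> 75.0 s
```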

  6. Hello gentlemen

    We saw how Tom repulsed your persistent attacks ;), now we have a minute to say a few words.

    1) ETA is always a problem. We see not one lucky example. Remember EIM (Tesla), promised in March 2007 - but where is it? Ok, so why is EI9 taking so long? There are 2 reasons:

    a) Every development plan is approximate at the start; there are always things to solve along the road - it's normal/unavoidable. A perfect/ideal/academic plan would never work in practice. For example: the Rama bugs in EI8. Yes, it took several months and increased our timeline a lot. But what to do? Not fix them? The user would be unhappy (btw: the same user who asked for the fastest release). Same with GI, Camera Maps and many others - after learning a problem in all its details there is much more work than expected. We guess it's the same in your art projects.

    b) The second reason is just one word: "re-architecting". Many things in EIAS were designed really well, and even now, many years later, we see no better solutions. At the same time a lot is obsolete and should be re-designed; it's a normal process of evolution for any software. Yes, architecture changes do not always produce immediate results, but they are necessary, otherwise we have no room for concrete final features for the user. This aspect was ignored in previous EI versions, and with EI9 we got the "awakened dog" effect.

    Well, it's our first release; we guess from now on things will go easier and faster ;)

    2) A promised full feature list and (especially) a public beta = a very risky venture (to put it softly). Or, in simple words, it can kill a product. We've learned the effect of a public beta - and for us it was totally counter-productive. The user's appetite has no limits. That's normal, but the great minus of a public beta is: the user momentarily forgets about the things already done (really, they are already in his pocket, because they were promised). Instead his attention is concentrated on the things he would like to see. Typically these ideas are quite rational, but... they should be matured, analyzed and implemented in the next round of development.

    So we think only base/flagship features (like MP/64 in EI9) should be announced beforehand. Any release should be a holiday/surprise. :)

    That's all, thx for your understanding

  7. Hi All

    Thanks for your input. Some comments

    - Generic 6DOF constraints (Ian) were added in the first Bullet build :rolleyes: As for the other standard constraints - it's for you to decide how usable/necessary they are or whether, maybe, it's better to put the accent on other features/improvements.

    - Springs (Ian). Well, right now we are applying efforts to get rid of them :rolleyes: The attractor force creates this effect by itself. Maybe to create an "ideal pendulum" we need to allow "Spring Factor" > 1.

    - Blocking (Loon). It's already under construction and will be available in the next build.

    - "Reflected" forces (Loon). That's we don't know "how". A straigtforward implementation can be too complex but unsafe/unpredictable. Maybe a good idea is to think like "what workaround a user would do for such effect" and then help him with some "trick" options

    - "one more... is it possible to modify the Forces with Deform Editor?" (Loon). It was our original idea, but later we understood: "polygon force" (Diego) does this better and simpler. However, how about "vice versa" - new kind of Deform "Force". It's quite Ok with basic programming principle "one cow should give milk many times". In other hand we're afraid that eventual feature would be appreciated by users incorrect, like (irresponsible) promising of soft body dynamics - but in EI9 our goal is rigid bodies only.

    Summary: it would be nice to have a "Bullet features list" where we can see what should be done ASAP, what comes later, and what should still be matured. Every feature is usable, but "how much" and what the priorities are can be seen with a list only.

    Thanks

  8. Hi All

    Thx for your participation. Diego, your "colonoid" is amazing; we could not resist and will add this ability too ;-)

    Generally all is going quite fine and forces are already functional. There is one problem that appears here and there. The attached movie is the simplest "attractor", where you can see a "spring" effect. A moving body can't be stopped momentarily; you need to apply some force (during some time). Of course, it's a simulation and we don't need to follow physics exactly - so why not just set the velocity to zero when the attractor's center is reached? That would be Ok for a single force only. But with 2 or more forces it's unknown "where the velocity comes from", so by setting the velocity to zero we would devaluate the effect of the other forces.
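A minimal 1D sketch of the dilemma (all constants arbitrary, not our actual solver): zeroing the velocity on arrival would also erase what other forces contributed, while a drag force proportional to velocity stops the body smoothly without singling any force out:

```python
# 1D attractor pulling a body toward x = 0, integrated with semi-implicit
# Euler. Without drag the body oscillates forever (the "spring" effect);
# a velocity-proportional drag force damps it out. Constants are arbitrary.

def simulate(steps, dt=0.01, k=50.0, damping=0.0, x0=1.0):
    """Return final (position, velocity) after `steps` integration steps."""
    x, v = x0, 0.0
    for _ in range(steps):
        force = -k * x - damping * v   # attractor pull + optional drag
        v += force * dt                # semi-implicit (symplectic) Euler
        x += v * dt
    return x, v

x, v = simulate(5000)                  # no drag: the "spring" keeps swinging
xd, vd = simulate(5000, damping=5.0)   # with drag: settles at the center
print(0.5 * v * v + 25 * x * x)        # energy stays high without drag
print(0.5 * vd * vd + 25 * xd * xd)    # energy is damped away with drag
```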

    It would be nice to know how other apps solve this problem (and what options they have).

    Thanks

    Preview.mov.zip

  9. Hi, Brian

    I often have a situation where, as an example, I create a model with BlobMaker. I can set the EIAS resolution to 5 and the Camera resolution to 30. I get a quick easy to deal with model in EI and nice smooth high res model in Camera. But, once rendering the animation I notice that Camera takes significant amount of time creating the mesh for each frame. If the Blobmaker settings are not animated it shouldn't need to do this. It should be able to create the model once and use the same model file for each frame.

    You're right, and it's a common problem for plugins that create large geometry. We'll think about how to solve this, but nothing is promised (to be honest, not one idea is in our heads yet).

    Thanks

  10. Hello, Brian, All

    Animator 9 will still be a 32-bit app, with the 2Gb RAM barrier. However, it's important to understand that 64-bit is not a "magic wand" that solves all RAM problems:

    - We have no doubt Brian can always create a mesh so dense that it won't fit into his 18 Gb (right?) of room. Appetite has no limits.

    - Before creating such meshes, it's a good idea to think about what to do with them in Animator, how long they can be previewed, etc.

    - We can't say about Placer, but BlobMaker and Mrs do use an obsolete memory allocation approach and thus can't be ported to 64-bit immediately. Changing the approach would allow using twice as much RAM even in 32-bit.

    - and the last one:

    .. or just Camera?

    Hmm... it looks like Camera is just a small (insignificant) detail, huh? :rolleyes:
  11. Hi All

    Let us add some explanations

    EITG was the publisher of the Konkeptoine products. They did not have ownership or source code. EIAS3D is now publishing many of these products, but it is up to the original owner of each of the products to decide if they wish to continue to support them.

    We are more than happy to publish 3rd party products on our web site and are eager to work with any and all 3rd party developers.

  12. Hello fiberblast

    But, it's important that I'm able to work in 16bit during compositing, and deliver as such.

    Are there work arounds for 16 or 32bit renders?

    Of course 16/32-bit rendering is needed, but we think with the 2Gb RAM limit it would not be very effective, because such images eat significantly more memory. Therefore other features should be done first.

    Note: RPF_Saver has a "non-clamped color" channel (32 bits) that can be used for post-processing.

  13. Hello Brian

    Is there any reason why "Fix Smooth Shading" would change the number of vertices? The number of polygons remains consistent.

    Because FACT stores all info in vertices. The simplest FACT cube has 24 vertices (3 per corner). If we need different vertex normals at corners/edges, we must create extra vertices with the same positions but different normals. Same with UV seams.
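The cube arithmetic can be made explicit with a toy counter (a simplified illustration, not the FACT format itself): a shared corner position must be duplicated once per distinct normal meeting there, so a hard-edged cube needs 8 corners * 3 face normals = 24 vertices:

```python
# Why "Fix Smooth Shading" changes the vertex count: per-vertex normals
# force duplication of shared positions. Simplified illustration only.

def fact_vertex_count(corner_normals):
    """corner_normals: per corner position, the set of face normals meeting there."""
    return sum(len(normals) for normals in corner_normals)

# hard-shaded cube: each of the 8 corners sees 3 distinct face normals
hard = [{"+x", "+y", "+z"}] * 8
print(fact_vertex_count(hard))     # 24 vertices for only 8 positions

# smooth-shaded: one averaged normal per corner, no duplication needed
smooth = [{"avg"}] * 8
print(fact_vertex_count(smooth))   # 8 vertices
```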
  14. Hi, Richard, Ian, All

    It's a really good idea to have a LIST of features to do. Usually it goes like:

    - User: I've an idea

    - Developer: hmmm... the idea is good but

    Both are right, but it makes no progress :rolleyes: But all can change if there is a LIST (the EI8 UserPack is a good example). Ok, a few more points for the list:

    - remove Radiosity settings from Animator's UI (1-2 days of work)

    - add more options to the Group "Shading" tab, like a popup with a choice of:

    - full GI calc (this group is an "ideal mirror"; it should use the "Secondary" GI/Lights settings)

    - use "Secondary" lights

    - use Photons "Secondary"

  15. Hello Nigel

    About edges/corners blotching

    a) increase the (baked) photon map quality as much as possible

    b) if you use "Light Customize" - do this only as "final polishing"; it would be interesting to see the image without any customization (if possible)

    About contact shadows. As we see, the shadows are dark/contrasty enough, but the glass objects look like they "fly off". The attachment is an approximate glass material setup (just to our taste). Of course you need to use a subtractive color filter for colored transparent objects. Also: if there is a possibility to make the round table a bit less transparent - it could make life easier. Because with "transparent on transparent" - where does a shadow come from? :rolleyes:

    post-11-13269393609217_thumb.png

  16. Hello Nigel

    Speaking technically, this photon map is quite acceptable. Blotching is not dangerous because GI averages the collected reversed illumination anyway. Bake the map (if it's not baked yet). Edges and corners should not be darker. Of course, more shadows or less etc. - all this is for the artist's taste; it's controlled by the lights' intensity, dropoff and bouncing settings.

    About transparency artifacts. First off, check them with only a simple light with shadows. If Ok, check the photon map settings (should be "GI & secondary"). If nothing helps - please simplify the project and upload it to the BugsTracker here :rolleyes:

  17. Hi All

    1) About the test project: if we consider 10 things together, most probably not one problem will be found :rolleyes: Thus we've simplified the project; there are 2 lights: a Light Object (white quad) and an RT light (no soft shadow, visible red spot in reflections). Both emit photons.

    2) Quadratic dropoff should be set for lights with photon emission. Otherwise the photon map can be far away from the direct/GI illumination. Although Area Lights (and Light Objects) can be used without such dropoffs, that does not produce correct results usable "as is".
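The mismatch can be shown with a toy falloff comparison (illustrative values, not Camera's actual math): photon energy falls with the square of the distance, so only a quadratic dropoff on the light's direct illumination matches its own photon map:

```python
# Why quadratic dropoff matters for photon-emitting lights: the photon
# map obeys inverse-square falloff, so any other direct-light dropoff
# drifts away from it with distance. Numbers are purely illustrative.

def direct_intensity(base, distance, dropoff="none"):
    """Direct illumination under the chosen dropoff mode."""
    if dropoff == "none":
        return base
    if dropoff == "linear":
        return base / distance
    return base / distance ** 2            # quadratic

def photon_intensity(base, distance):
    """Photon-map illumination: physical inverse-square falloff."""
    return base / distance ** 2

for d in (1.0, 2.0, 4.0):
    gap = direct_intensity(100.0, d, "none") - photon_intensity(100.0, d)
    print(d, gap)                          # mismatch grows with distance
```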

    3) The wall planes were made with "Cast Shadow" = off. Here it gives no speed advantage, but it kills the photons' bounces (if an object casts no shadow, photons should pass through as if it were a fully clipped surface, shouldn't they?)

    4) The attached image shows reflections calculated fast (just using photon maps) and fully (rays count = 50 for both GI and Area Light). Sure, the fast reflections are less accurate, but they can be usable (especially for blurred ones).

    post-11-13269393607828_thumb.png

    Hello Nigel

    It's a typical room scene. Create Area lights for the windows, set up their quadratic dropoffs and photon params. Use more photons (like 2 million per light). Turn GI off (for a while) and specify the "Always Visible" map mode. Play with the map settings until you get a draft illumination you like. That's 3/4 of all the work; the rest is easy: turn GI on and set the mode to "GI & Secondary". But don't hurry with the final render; take care of good photon illumination first.

    Good luck

  19. Hello Spleeve

    I was hoping that the Igors could chime in and tell us if this would work with Camera - if they haven't already done so somewhere else.

    Igors?

    Sorry for the delayed answer (we've been very busy these last days). The link is interesting but, for a concrete example, EI caustics uses RT only partially. Half or more of the caustics calculation is done with the Z-buffer, just because it's faster. So in this context hardware acceleration would not be very effective.

    Speaking of GPU and hardware rendering overall - it looks interesting and promising, but IMO it can be considered only after the standard/common resources (such as RAM and processors) are fully used and utilized. First the basics, then the advanced.

  20. Hello James, All

    We realize it's a reasonable request, but its eventual implementation is far from easy. The details are:

    - As you know, a buffer shadow is a (simplified) render pass that should be written in the ccn file.

    - For a local render the Animator writes the shadow pass for the first frame and a shadow file reference for all further frames, so all is fine. Note that Animator also removes the shadow file(s) before the render starts.

    - In a network render, every frame (or strip) is written in a separate ccn to be distributed over the network in any order. Slave Cameras cannot know whether a given frame is first or last; it's just a "render job" for them. Thus the shadow pass(es) would have to be written in every ccn and processed by the slave Cameras at any time. It's also unclear how to check the consistency of an existing shadow file and how to avoid numerous obsolete files on the slave's side.

    All that does not mean "impossible"; IMO it can be added for UserPack voting - we just explained why it's a solid portion of work.
