Everything posted by Igors

  1. Hola Diego Thanks for the explanations, it's clear now. For the MDD reader it makes no difference whether the animation is scaled, rotated or modified in any other way - everything is treated as "vertices", so an RTS animation is read the same as a "non-RTS" one. But any transform must be applied to the model on export. For example, in EI terms: you can export a FACT with "Preserve Transformation" off, and the original model and its scale values are stored. If this checkbox is on, a scaled model is exported. If the MDD is written for a scaled model, the model must be exported with the scale too. You need to activate the corresponding options in your exporter so that the model and the MDD frames stay in sync (a small sketch of baking the transform follows below).
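     A minimal sketch in C of what "applying the transform on export" means for the vertex data, assuming a generic row-major 4x4 model matrix; the names are illustrative, not the actual exporter API:

         typedef struct { float x, y, z; } Vec3;

         /* Apply a row-major 4x4 transform to a point (w = 1). */
         static Vec3 transform_point(const float m[16], Vec3 p)
         {
             Vec3 r;
             r.x = m[0]*p.x + m[1]*p.y + m[2]*p.z  + m[3];
             r.y = m[4]*p.x + m[5]*p.y + m[6]*p.z  + m[7];
             r.z = m[8]*p.x + m[9]*p.y + m[10]*p.z + m[11];
             return r;
         }

         /* Bake the model's transform into every vertex of a frame
            before the frame is written to the MDD file. */
         static void bake_frame(const float modelMatrix[16], Vec3 *verts, int count)
         {
             for (int i = 0; i < count; ++i)
                 verts[i] = transform_point(modelMatrix, verts[i]);
         }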
  2. Hola Diego We have no idea what RST is, please explain.
  3. Hola Diego It doesn't disappear "at all"; let's use the plugin's built-in diagnostics. Select "MDD.plm Group" in the Project Window and look at the "Info" tab. We see 1634 vertices here - that is the MDD file content (stored for frame 0). Select "Display points" and scale the view: we see a capsule, but 100 times smaller than the imported one (the one linked to the plugin). Of course, such MDD data cannot be recognized. You need to write an MDD that matches the model, at least at one of the frames. How - that depends on the concrete MDD exporter, not on the reader. It's easy to predict a possible objection: "why should I worry about match/mismatch? Can't you just make a normal import/export?" Here we can't, because the "format" is the "format"; that is something we can't change within the bounds of this task. MDD's disadvantages are a continuation of MDD's advantages, and vice versa. Yes, it's pretty simple to store "vertices only", but at the same time it makes the user responsible for providing a corresponding model, and that's not always easy or comfortable. That's why we avoided MDD and proposed Gnome instead - but that's another story (a small header-reading sketch follows below) :rolleyes:
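     A minimal sketch in C for checking an MDD file against a model, assuming the commonly documented MDD layout (big-endian 32-bit frame count, 32-bit point count, the frame times, then one XYZ float triple per point per frame); the tool name is made up:

         #include <stdio.h>
         #include <stdint.h>

         static uint32_t read_be32(FILE *f)
         {
             unsigned char b[4];
             if (fread(b, 1, 4, f) != 4) return 0;
             return ((uint32_t)b[0] << 24) | ((uint32_t)b[1] << 16) |
                    ((uint32_t)b[2] << 8)  |  (uint32_t)b[3];
         }

         int main(int argc, char **argv)
         {
             if (argc < 2) { fprintf(stderr, "usage: mddinfo file.mdd\n"); return 1; }
             FILE *f = fopen(argv[1], "rb");
             if (!f) { perror("fopen"); return 1; }

             uint32_t frames = read_be32(f);   /* number of stored frames   */
             uint32_t points = read_be32(f);   /* vertices stored per frame */
             printf("%u frames, %u points per frame\n", frames, points);
             /* If 'points' differs from the model's vertex count, the reader
                has no way to match the data to the mesh. */
             fclose(f);
             return 0;
         }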
  4. Hi, Mark MDD is just a "baking" of an existing/created animation, not a way to create one. The EI MDD reader can interpolate data between stored frames and play the animation in an arbitrary order/sequence, but essentially it remains the same animation. Not everything can be stored in MDD, because it requires exactly the same count of stored vertices per frame. Thus there is no way to store animations with mutable geometry, such as particles and RealFlow meshes. Another limitation is that vertex normals must always be recalculated (see the sketch below), and that can be a problem for some models (for example, those coming from NURBS). With all these limitations it can still be a usable thing :rolleyes:
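     A minimal sketch in C of the usual recalculation, for illustration only: accumulate face normals onto each vertex (area-weighted) and normalize. Any hand-edited or NURBS-derived normals are lost, which is the limitation mentioned above.

         #include <math.h>

         typedef struct { float x, y, z; } Vec3;

         static Vec3 v_sub(Vec3 a, Vec3 b) { Vec3 r = { a.x-b.x, a.y-b.y, a.z-b.z }; return r; }
         static Vec3 v_cross(Vec3 a, Vec3 b)
         {
             Vec3 r = { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
             return r;
         }

         /* Rebuild per-vertex normals from triangle geometry. */
         void recalc_normals(const Vec3 *pos, int nverts,
                             const int (*tri)[3], int ntris, Vec3 *normal)
         {
             for (int i = 0; i < nverts; ++i)
                 normal[i].x = normal[i].y = normal[i].z = 0.0f;

             for (int t = 0; t < ntris; ++t) {
                 Vec3 a = pos[tri[t][0]], b = pos[tri[t][1]], c = pos[tri[t][2]];
                 Vec3 n = v_cross(v_sub(b, a), v_sub(c, a)); /* length ~ 2x area */
                 for (int k = 0; k < 3; ++k) {
                     int v = tri[t][k];
                     normal[v].x += n.x; normal[v].y += n.y; normal[v].z += n.z;
                 }
             }
             for (int i = 0; i < nverts; ++i) {
                 float len = sqrtf(normal[i].x*normal[i].x +
                                   normal[i].y*normal[i].y +
                                   normal[i].z*normal[i].z);
                 if (len > 0.0f) {
                     normal[i].x /= len; normal[i].y /= len; normal[i].z /= len;
                 }
             }
         }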
  5. Hi, Brian, All Here we have no idea what this eventual feature should do. "Capture vertices" (stretch) does not mark/save individual vertices; it just stores the deform region (cube) in private data to have a blended stretch between the saved and actual regions. For Bend that makes no sense. The Bend problem is what to do with the vertices outside the region. Apparently we need "left unchanged" and "right fully rotated", but how do we detect what is left/right for complex models? We agree the actual method isn't perfect, but we see no better one.
  6. Hi, Bill No technical problems here. But what to save? You started from the material. Then light. Then GI settings as well. We agree, everything makes sense. But how can we estimate a task if it keeps changing on the fly? What is the final list/formula? For example it could be like:
  7. Hi, Brian, All Not a bug, bend does not do this. It uses each vertex's position to calculate the actual bend angle. Region 1 (magenta in the attached image) runs along the X-axis, therefore: - if vertex X < 0.0, no angle is applied - if vertex X > 50, then the maximal angle (90.0) is applied - if 0.0 < X < 50, the angle is interpolated between them (a minimal sketch of this rule is below). With the top 3 regions everything goes fine - all right-side vertices are deformed by 90 degrees. But for Region 1 (bottom) the vertices of the last segment end up inside the X-range [0..50] again (as a result of the previous deforms), so a fancy result happens. Note also that, regardless of this region issue, bend can't close the circle to make a full torus (in this case): rotation does not change the radius, so the left and right vertices will not coincide. A workaround needs to be found; here artists can advise better than we can.
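     A minimal sketch in C of the per-vertex rule described above (illustrative, not EI's actual code). Because the test uses each vertex's current X, vertices that previous deforms have pushed back into the range are bent again, which is the effect in the image.

         /* Bend angle for one vertex: region from x0 to x1 along X,
            maxAngleDeg applied past the far edge, interpolated inside. */
         static float bend_angle(float x, float x0, float x1, float maxAngleDeg)
         {
             if (x <= x0) return 0.0f;           /* before the region: unchanged */
             if (x >= x1) return maxAngleDeg;    /* past the region: full angle  */
             return maxAngleDeg * (x - x0) / (x1 - x0);  /* inside: interpolated */
         }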
  8. Hi All Bill, we've read your explanations and found them quite rational and reasonable. But this formulation is too long to have any chance of winning in the UP voting. When a voting starts, there will be no time for debates/explanations of "what does it do?". If something is unclear (even a little), the feature is off the board immediately; UP works that way. So please find a short/impressive formula for the feature (to avoid further unneeded questions) - then it makes sense.
  9. Hi All Please don't consider this post as something "official"; here we just want to share our opinion. We always thought an "integrated modeler" is a nice idea - but it's equally hard (maybe impossible) to implement. For developers the problem is the "business scissors" (or whatever the English term is) of this work. For example, a tool to show model artifacts, "rat" places and/or "holes" in meshes, the ability to visualize and modify vertex normals, etc. Nice too, IMO, but, speaking realistically, there are zero chances to have it here. Why? Let's count what we want from an eventual developer: - real-time OGL preview (so +job) - the ability to have different model views (like front, side, orbit etc.) (so +job) - the (unavoidable) ability to select something in order to modify it ("just view" makes no big sense) (so ++job) - etc., etc. This list can be really long, but for any developer it's already enough to understand that there is a very solid portion of work. What are the developer's reasons/interests to take this stone onto their neck? And the normal answer is: it makes sense to do such things within the bounds of a standalone/universal app (for all users), but there are no enthusiasts to "project" this plan onto any concrete app (no matter how big it is). Because "wide things" can be done only for a "wide market". So no "integration". We saw a lot of attempts to change this order (with megatons of demagogy, "public opinion", intimidation, promises, etc.) - but none succeeded :rolleyes: Either developers have "their interest" and do their work - or not. There is no way to "push" them.
  10. Hi All We confirm: it's a bug in EI8 Animator. Fixed, thx for the report
  11. Hi, Bill, Ian, All The idea is interesting but requires further specification. It's clear that all textures/procedurals should be collected. But what is a "material" in this context? The master material only? If, for example, a user has applied a master and then customized something, how should that be saved? Please make it well-defined, then let's add this "brown" feature to the UP vote list.
  12. This is an OS bug (just one step more :rolleyes: ) Yes, there is one thing that changed in EI8 for flare plugins: memory reallocation. The reason was that Darwin OS does not release memory as needed. Example: a 1 GB block is allocated, then the block is shrunk to 1 KB. Alas, 1 GB is still busy :dodgy: In practice it caused a few flares to run the render out of RAM. Thus the allocation was changed in order to avoid this problem. As a side effect, the address of a reallocated block always changes (no matter whether it grows or shrinks). This does not contradict the plugin API, which never declared that you can rely on an unchanged address (see the sketch below). Please check this. If it's not so, please equip us with the plugin, and we'll inform you of all the details of what changed on the host's side. Thanks
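     A minimal sketch in C of the general point (plain C semantics, not an EI-specific API): realloc() is allowed to return a different address, so a plugin must never keep using a pointer it cached before the host reallocated the block.

         #include <stdio.h>
         #include <stdlib.h>
         #include <string.h>

         int main(void)
         {
             char *block = malloc(1024 * 1024);   /* imagine a large flare buffer  */
             if (!block) return 1;
             strcpy(block, "flare data");

             char *saved = block;                 /* caching the address is unsafe */
             block = realloc(block, 1024);        /* shrink; the block may still move */
             if (!block) { free(saved); return 1; }

             if (block != saved)
                 printf("block moved: %p -> %p\n", (void *)saved, (void *)block);
             else
                 printf("block stayed at %p (not guaranteed)\n", (void *)block);

             /* 'saved' must not be dereferenced once realloc() has succeeded. */
             free(block);
             return 0;
         }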
  13. Hi, monday1313 We don't have much to add to what Ian has said. Yes, Camera is now "unmounted" for fundamental EI9 features. So for now we can't give any update promises.
  14. Hi James, Ian Correct; simply put, QTVR output requires all of the rendered frames, but with network rendering only the single/current frame is available here - no data to make a QTVR.
  15. Hi, splitpoint, All Please save your time; unfortunately it will not work under Win7 :@ CLCS (Calculating Surfaces) relies on the system realloc() call not changing the memory block's location. But that is not so in a modern OS like Win7 (same as in 64-bit Windows). So this fix will be available in a further version of Camera. For now, please install an older Windows version on another partition.
  16. Hi, monday1313 Yes, everything has been received and is already fixed. Unfortunately, it was a series of bugs, not only in the shader but also in Camera. Thus you can't use the fix immediately. Thanks
  17. Hi Reuben Thx for the help. It's because v7 was built with an older compiler, while v8 is built with MSVC 2008 and requires its run-time libraries to be installed.
  18. Hello Brian We confirm: yes, it's a bug. Fixed, thx for the report
  19. Hi, splitpoint, All It works (a small movie is attached), but it's a very simple test where everything (model and MDD) is just fine. Real exports are rarely this clean, so more complex data are welcome. (Attachment: Project.img.zip)
  20. Hi All Yesterday we worked with one of the bug projects and noticed the scene loads slowly. Animator's windows appear OK, but then we see a delay of about 10-15 seconds. That's slow for a scene with 3.5 million facets on a 2.66 GHz Intel Mac. We also see Animator eating almost 700 MB of RAM. We tried to learn where the bottleneck is. In this concrete case it's the preview of textures. There are 699 textures "applied" in the scene. The absolute majority of them are the same textures applied to different (duplicated) objects. So in fact the project has only 20-30 "original" textures (or even fewer). However, Animator's texture preview is not optimized, so all 699 copies are loaded (no matter that most of them are the same); see the de-duplication sketch below. Note also that with an optimized texture preview it would be possible to make the preview sizes larger. Currently any texture is downsampled to 256x256, and this simplified image is used for the OGL or software preview. Often it looks very blurred. We would like to see this improvement in the EI9 User Pack.
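     A minimal sketch in C of the kind of de-duplication meant here (the names are hypothetical, not Animator's internals): preview images are cached by file path, so hundreds of applied textures that reference the same 20-30 files are decoded only once.

         #include <stdlib.h>
         #include <string.h>

         typedef struct PreviewImage PreviewImage;        /* decoded preview bitmap       */
         PreviewImage *decode_preview(const char *path);  /* assumed host-side decoder    */

         typedef struct CacheEntry {
             char path[1024];
             PreviewImage *image;
             struct CacheEntry *next;
         } CacheEntry;

         static CacheEntry *cache = NULL;

         /* Return the preview for 'path', decoding it only on the first request. */
         PreviewImage *get_preview(const char *path)
         {
             for (CacheEntry *e = cache; e; e = e->next)  /* hit: reuse the preview  */
                 if (strcmp(e->path, path) == 0)
                     return e->image;

             CacheEntry *e = calloc(1, sizeof *e);        /* miss: decode and store  */
             if (!e) return NULL;
             strncpy(e->path, path, sizeof e->path - 1);
             e->image = decode_preview(path);
             e->next = cache;
             cache = e;
             return e->image;
         }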
  21. Hi, monday1313 No, it was not fixed in EI8. We don't understand the relation with glow and alpha; please explain more. If you render in strips, this artifact is unavoidable. But if (as you wrote) it happens with a single Camera, then it's definitely a bug that should be fixed. For a separate shader the fix can come before EI9. But anyway, a minimal sample project is needed. We remember a scene with an apple that had the same problem, but after several hours of RT shadows on our old G4 700 we still could not figure out what's wrong there. So please help with a project. Thanks
  22. Hello A discussion of "does the app need a dongle?" can be very long and very philosophical :rolleyes: So let us avoid it and instead talk about another (purely technical) aspect: it's a really huge amount of work to remove the dongle (the same as adding it, not less). Plus, after the dongle is (imagine) removed, we may see other problems (that we can't even predict now). Thus we prefer not to wake a sleeping dog and to spend the months on productive development instead of dongle-fighting. But there is a rational kernel here: EIAS should have an attractive demo version (of course without any dongle). With that we 100% agree and are already working on it.
  23. Hello Brian Project + EXR file, please. It will not be fixed tomorrow but will be considered for the next beta.
  24. Hola Diego We think a "texture filter" is an interesting theme (even though it's not new). Maybe it can be done via graph controls for each of: - hue (HSV) - saturation (HSV) - value (HSV) - RGB - alpha - red - green - blue - bump (?) It would also be interesting to apply the effect selectively, for example by distance, by edge density, etc. (a rough sketch of the per-channel idea is below). Anyway we need: - to see the users' interest/enthusiasm :rolleyes: - a stock of ideas/propositions - time to write such a shader (we're very busy, so can't promise it will be fast)
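     A minimal sketch in C of the per-channel "graph control" idea (names are illustrative, not an existing EI shader API): each channel is remapped through a user-drawn curve sampled into a 256-entry lookup table.

         typedef struct {
             float lut[256];               /* curve sampled at 256 positions, 0..1 */
         } ChannelCurve;

         /* Remap one channel value (0..1) through its curve, with linear interpolation. */
         static float apply_curve(const ChannelCurve *c, float v)
         {
             if (v < 0.0f) v = 0.0f;
             if (v > 1.0f) v = 1.0f;
             float fi = v * 255.0f;
             int   i  = (int)fi;
             if (i >= 255) return c->lut[255];
             float t = fi - (float)i;
             return c->lut[i] * (1.0f - t) + c->lut[i + 1] * t;
         }

         /* Apply independent curves to the R, G and B of one texel. */
         static void filter_texel(float rgb[3], const ChannelCurve curves[3])
         {
             for (int k = 0; k < 3; ++k)
                 rgb[k] = apply_curve(&curves[k], rgb[k]);
         }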
  25. Hello Brian We would like to hear your propositions on how to improve EXR2Mesh, but first please look at the beta2 version of the plug-in :rolleyes: One thing: in Camera (but not in Animator) it's possible to create a dense mesh near the camera and a low-resolution mesh in other areas (although some problems in animation are possible); a rough sketch of the idea is below. If you think it's interesting/usable, we're waiting for your UI design screenshots.
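     A rough sketch in C of the "dense near the camera" idea (purely illustrative, not the plug-in's actual logic): pick a per-region subdivision level that falls off with distance from the camera.

         #include <math.h>

         /* Full detail inside 'nearDist'; drop one level every time the distance doubles. */
         int subdivision_level(float distToCamera, float nearDist, int maxLevel)
         {
             if (distToCamera <= nearDist) return maxLevel;
             int drop  = (int)floorf(log2f(distToCamera / nearDist));
             int level = maxLevel - drop;
             return level > 0 ? level : 0;
         }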