What constitutes a valid curve or surface in SMLib?
There are two tolerances that are important for the user to understand: IW_EFF_ZERO and dDistanceTolerance. IW_EFF_ZERO is defined to be 1.0e-12 and denotes when two double precision values are identical or when a value is zero. It should not be changed unless you are moving to a higher or lower precision platform. Many of the methods take a distance tolerance (usually dDistanceTolerance), which determines when two points in 3D space are identical. As a general guideline, the value should be at least 100 times smaller than the objects on which it is applied, and it should be larger than IW_EFF_ZERO. The user of the software has total control over this tolerance: it has no predefined default, it is not a global value, and any method or function which utilizes it allows the user to input it. The following interpretations of the distance tolerance are used:
There are other tolerances used in operations that compute results to a given accuracy, for example the chord height and angle tolerances in tessellation. These should not be confused with the distance tolerance.
Comment: "It is very common in SMLib routines to give 2 tolerances for an operation. The usual tol is a 'Distance' tolerance, and we usually use anywhere from 0.1 to 0.0001. This is often scaled by the diagonal of the Min-Max box....typically it would refer to how close a point or a curve has to be to be 'on' whatever it is.....
The other tol is an angle tolerance given in radians.....we usually use 20*PI/180. This has something to do with the angle between 2 tangents...or a tangent and a chord. It is an upper limit for subdividing a curve into linear pieces.....very small curvy segments might slip under the radar of the dTol above.
I don't think they should be treated as iron-clad guarantees, since they are used for sampling to see whether to subdivide further, or shorten the 'step' in a march. Crude though it may sound, I usually try some 'reasonable' value, look at the density of the result, and then change the value accordingly. Too much zeal for unneeded precision just creates curves and surfaces with gobs of points and causes difficulty downstream. You can easily be chasing what is just noise about a smooth curve. After that, if I need a precise measure, I take lots of points and compute closest values, and measure differences. "
Many of the curve and surface methods allow the user to input an interval (IwExtent1d) on the curve or a domain (IwExtent2d) on the surface. Basically this allows the user to limit the area of interest for a given operation to a subset of the natural boundary. For example, the user may intersect a portion of a curve with a portion of a surface. If the user does not want to limit the search, the natural interval and domain is available through corresponding methods on the curve and surface.
A curve or a surface may have internal continuity changes at the knots. Continuity changes may be caused by either duplicate knots or duplicate control vertices. There are methods on the curve and surface to compute the continuities. Except where stated explicitly (DropCurve on IwSurface), all methods will handle curves and surfaces with internal C0 continuity (discontinuous tangent directions within a curve or a tangent plane discontinuity within a surface). In other words they may find answers which lie on the boundary or internal discontinuity. For example, the minimization between a point and a surface may return a point internal to the surface, on the boundary of the surface, or along a discontinuous edge of the surface.
Another question one might ask is what happens if an answer is produced which is very close to a discontinuity. For example: what if a line intersects a surface near an internal discontinuity? The results obtained would be as if the surface or curve were broken up into homogeneous pieces, each piece intersected, and the results combined. In our example one might get two intersections, one internal to the surface and one on the discontinuity. Because the answers are not identical, both are kept. This conforms to our general philosophy of trying not to lose information. Sometimes this may cause us to return more information than the user needs, but that is better than returning too little.
The software is now multi-processor compliant. The IwContext is a very important object: it contains all global data such as the cache manager and temporary working memory for certain functions. Every time you create an object (IwObject and subclasses) on the heap you will need to use the overridden "new" operator that takes a context. The standard new operator has been made private to prevent creation of heap objects without a context.
Any method that creates an object takes an IwContext object, either as an argument to a "Create" method or as an argument to "new". The context must remain in scope for at least as long as the objects created with it. Therefore, if you are keeping objects around between commands, creating a single global context at startup is recommended.
If you are just using objects locally, you can put the context on the stack.
This way all objects created in a given "Context" can be deleted by deleting the corresponding "pool". Note that we do not provide a pool based memory manager; you will have to purchase or develop one yourself. SmartHeap from MicroQuill does come with a pool based memory management system.
All memory management for the library is handled in the iwos_memory.h and iwos_memory.cpp files. You may modify iwos_memory.cpp if you wish to utilize a different memory mechanism. We also provide an interface to pool based management of memory contained in the IwMemPool object. There are a number of different ways to create an object.
On the stack - cleanup happens automatically via the destructor. In general you should avoid creating IwCurve and IwSurface and their subclasses on the stack. The array based objects (IwTArray, IwSArray, IwSolutionArray) typically should be created on the stack. If an array object is created on the heap, you will need an additional context argument to the destructor.
On the heap - cleanup requires an explicit "delete" or utilization of a stack based clean up object.
Here are some good "Rules of Thumb" to help you make decisions about how to use the Context:
There are three objects which are especially useful in the management and automatic clean up of heap based memory and objects. These objects will automatically delete objects when they go out of scope. You have the option of preventing them from deleting their contents by invoking the corresponding Clear method.
Note that the Caching Mechanism is now embedded in the IwContext object. By default a context creates a cache manager with a maximum of 1000 curve caches, 200 surface caches, 200 trimmed surface caches, and 200 brep caches. This is a fairly good sized cache and should be sufficient for most applications. You can scale it up or down by a factor of 10 and still be reasonably well served: decrease it to conserve memory, or increase it for better performance on larger models.
GSLib utilizes a caching system to improve performance. It caches the tessellated Bezier representation of curves and surfaces and constructs a hierarchical tree structure for fast traversal. You should always have caching turned on, with a minimum of 10 cache elements; that is, it will keep the tessellations for the last 10 curves/surfaces it has processed. Depending upon the performance/memory requirements of your application you may want to increase this number. The caching mechanism allows us to achieve a level of performance on global algorithms which is near that of the local Newton based algorithms.
GSLib utilizes an error return based error handling convention. We utilize a set of macros to simplify and hide most of the error processing. At the moment an error is detected, an error handling function is called (see iwos_error.h). Currently it prints an error either to the debugger output window in Visual C++ or to stderr on other systems. We suggest that you leave the error printing turned on until you have debugged the software you are implementing with GSLib. Knowing where an error has occurred will also help us to diagnose and fix any problems in our software.
The following macros are the most heavily used (see iwos_error.h):
Most of the documentation of this software is in this file and in the actual source code. We try to make things self-documenting. Once you figure out what is going on, you should be able to print out a copy of the include files and use that as your hard copy reference manual. Occasionally you may have to look at the source code for clarification.
Here are some of the rules we follow to keep things consistent and unambiguous:
There are many common method arguments which have nearly the same meaning no matter where they occur. The following is a list of some of those common arguments and what they stand for:
Solid Modeling Solutions, Inc. You may not distribute source code or documentation for this software outside of the company and the site which owns the license. The standard license agreement allows you to freely distribute object code in any application which does not contain a programmatic interface. All software and documentation is considered proprietary information and the intellectual property of Solid Modeling Solutions, Inc. This document contains trade secret information which is deemed proprietary.
Copyright © 1998-2010 Solid Modeling Solutions
All rights reserved.