Open Inventor FAQs - Viewer/User-interface


How can I handle double-clicks in Open Inventor?

Double-clicks can be handled with the class SoMouseButtonEvent. The static method isButtonDoubleClickEvent(const SoEvent* e, SoMouseButtonEvent::Button whichButton) tells you whether the event was a double-click of the specified button.

It is also possible to query the "double-click" state of a button: the DBCLK state of the class SoButtonEvent was introduced to provide this information.


How is the functionality behind the viewer controls accessed in Open Inventor?

Zooming is accomplished by changing the field of view of a perspective camera (the heightAngle field) or the view frustum height of an orthographic camera (the height field).

Dollying is accomplished by moving the camera closer to or farther from the object, i.e., changing the camera's position.

Rotating the object actually moves the camera around the stationary object in a circular trajectory. The position of the camera and the direction it is pointing change constantly, while the radius of the trajectory is kept constant (obtained from the focalDistance field).

Translation (panning) moves the camera along a line perpendicular to the direction it is pointing; the camera's orientation does not change.

The SoGuiAlgoViewer class contains all the utility tools to compute these viewer actions independently of the Open Inventor viewers.


How do I select the type of camera (perspective or orthographic) through the Examiner viewer class?

You can select the type of camera used at viewer creation time (see setCameraType), but you cannot change the camera type dynamically as the method used by the Examiner Viewer button is not a public method.

If you want to allow the user to dynamically switch back and forth between an orthographic and a perspective camera (the way the Examiner Viewer does through one of its decoration buttons), you need to preserve the apparent size of the object across the switch, so that zooming in with one camera and then changing to the other does not cause a visible jump. The following (slightly simplified) code is what the Inventor viewers use to dynamically change the camera type.

void toggleCameraType()
{
  if (camera == NULL)
    return;

  // create the camera of the opposite kind and compute the wanted height
  // or heightAngle of the new camera.
  SoCamera *newCam = NULL;
  if (camera->isOfType(SoPerspectiveCamera::getClassTypeId())) {
    // perspective -> orthographic: keep the apparent size of
    // objects at the focal plane constant
    float angle = ((SoPerspectiveCamera *)camera)->heightAngle.getValue();
    float height = camera->focalDistance.getValue() * tanf(angle / 2);
    newCam = new SoOrthographicCamera;
    ((SoOrthographicCamera *)newCam)->height = 2 * height;
  }
  else if (camera->isOfType(SoOrthographicCamera::getClassTypeId())) {
    float height = ((SoOrthographicCamera *)camera)->height.getValue() / 2;
    float angle = atanf(height / camera->focalDistance.getValue());
    newCam = new SoPerspectiveCamera;
    ((SoPerspectiveCamera *)newCam)->heightAngle = 2 * angle;
  }
  else
    return;  // unknown camera type -- nothing to toggle

  newCam->ref();

  // copy common stuff from the old to the new camera
  newCam->viewportMapping = camera->viewportMapping.getValue();
  newCam->position = camera->position.getValue();
  newCam->orientation = camera->orientation.getValue();
  newCam->aspectRatio = camera->aspectRatio.getValue();
  newCam->focalDistance = camera->focalDistance.getValue();

  // search for the old camera and replace it by the new camera
  SoSearchAction sa;
  sa.setNode(camera);
  sa.apply(sceneRoot);
  SoFullPath *fullCamPath = (SoFullPath *) sa.getPath();
  if (fullCamPath) {
    SoGroup *parent = (SoGroup *)fullCamPath->getNode(fullCamPath->getLength() - 2);
    parent->insertChild(newCam, parent->findChild(camera));
    SoCamera *oldCam = camera;
    setCamera(newCam);

    // remove the old camera if it is still there (setCamera() might
    // have removed it) and set the created flag to true (for next time)
    if (parent->findChild(oldCam) >= 0)
      parent->removeChild(oldCam);
    createdCamera = TRUE;
  }

  newCam->unref();
}

Is there any other way to know when a traversal ends and everything has been rendered to the viewer, other than adding a callback node at the last node of the scenegraph?

The official way would be to subclass the renderArea/viewer and override the redraw() method, something like this:

void myViewer::redraw()
{
  // Call my parent class's method to do the work
  SoWinExaminerViewer::redraw();

  // Now the rendering and buffer swap have been done...
}

As it turns out, there are a couple of ways to avoid the inconvenience of subclassing the viewer.

The easiest way is to set a framesPerSecond callback and set the number of samples (i.e., frames) to 1. This effectively causes the FPS callback to be called after the buffer swap on every frame, at the cost of some overhead for timing and computing the FPS.

The other way is to get the viewer's SoSceneManager, set your own render callback, and have that callback call the viewer's render() method. You have to call render() in this case because redraw() is protected, but render() just turns around and calls redraw(). You will need to pass the viewer pointer as "user data", since the callback must be a static function. For example:

static void
myRenderCB( void *userData, SoSceneManager *mgr )
{
  // Do the render
  SoWinExaminerViewer *pVwr = (SoWinExaminerViewer *)userData;
  pVwr->render();

  // Rendering and buffer swap are done
}

...and in your setup code...

SoWinExaminerViewer *pVwr = new SoWinExaminerViewer;
pVwr->getSceneManager()->setRenderCallback(myRenderCB, (void*)pVwr );

When I read my VRML model into (for example) the SceneViewer, and then I try to spin it, it flies offscreen. In general, it seems hard to control the camera with the hand cursor if I'm dealing with a VRML model.

Here's the quick solution:

Click the ViewAll button. (You may want to click the ResetHome button also.)

Here's the technical explanation:

In the ExaminerViewer, the current camera has a position and a "focal point". In this context "focal point" does not literally mean where the camera is focused; unlike a real camera, we do not have "depth of field". In the ExaminerViewer, the focal point is the point around which the camera rotates. It is defined by the camera's "focalDistance", "position" and "orientation" fields: in other words, it is located "focal distance" units from the camera "position" in the direction the camera is pointing.

When the scene graph does not contain a camera, the viewer creates one. In this case the viewer automatically calls the camera's viewAll() method after adding the camera to the scene. This method gets the bounding box of the entire scene, adjusts the camera so the entire scene is visible, and sets the focal point at the geometric center of the scene. This usually results in camera rotations behaving "as expected". The user can cause the viewAll() method to be called by clicking on a button in the standard viewer decorations (if they are visible).

If the scene graph already contains one or more cameras, the viewer uses the first one it finds. If the focal distance specified for this camera is not appropriate for the scene, the user may see "unexpected" behavior when rotating the camera.

If the scene graph does not contain a camera node, but contains one or more VRML Viewpoint nodes, then the viewer will create a camera using the properties of the first viewpoint node. But the VRML Viewpoint node does not have a "focal distance" field, so the viewer has to "guess" at this value. Again the user may see unexpected behavior.


Is there a way to use a keyboard or a mouse event in an Examiner Viewer? (I could do it in a Render Area.)

They work in viewers too, but... only when the viewer is NOT in viewing mode.

When the viewer is in viewing mode, the viewer handles mouse and keyboard events (to rotate the camera and so on).

If you want to see mouse/button events before the viewer sees them, use the "raw event" callback (SoWinRenderArea::setEventCallback). You will have to handle system-dependent events, but X and MS-Windows mouse/button events are very similar.


[Win32] Why does changing the scene graph not trigger a redraw?

If you are not using SoWin::mainLoop and not using IVF, then you may need to call SoWin::doIdleTasks in your application to ensure that Open Inventor's "idle queue" is processed. See the following topic, "When/How should my application call SoWin::doIdleTasks?".


[Win32] When/How should my application call SoWin::doIdleTasks?

If you're using SoWinRenderArea or any of its derived classes, e.g. SoWinExaminerViewer, then Inventor automatically creates an SoSceneManager, which creates a node sensor and attaches it to the root of the scene you specified with setSceneGraph(). When you change the scene graph (and autoRedraw is enabled, which is true by default), the sensor is triggered and the SceneManager "schedules" a redraw, just as if your app had called the scheduleRedraw() method. This schedules a sensor in the "idle queue". Conceptually, the sensors in the idle queue are triggered when Inventor detects that the app is "idle" (whatever that means), and then a redraw occurs.

Now the question is: how/when does Inventor know that the app is "idle" and process the Inventor idle queue?

If you are using SoWin::mainLoop (or nextEvent/dispatchEvent) it is handled for you. Otherwise, on Windows at least, it requires a little cooperation from the application. We have packaged the "things to be done when the app is idle" in the method SoWin::doIdleTasks. Calling this method will process the idle queue and so on. If this method is never called, then autoRedraw (automatically redrawing on a scene graph change) may not work.

If you are writing a straight Win32 program, you probably have an explicit Windows message loop in your app. An easy way to detect "idle" is when the message queue is empty. SoWin's nextEvent() method does this:

// If no input of any kind is waiting, the application is idle
DWORD qstatus = GetQueueStatus( QS_ALLINPUT );
if (qstatus == 0)
  SoWin::doIdleTasks();

before calling GetMessage (because GetMessage blocks when the queue is empty!) and other usual Windows stuff. This is about the same as calling PeekMessage.

If you are writing an MFC program, it's easier because MFC has its own idea of what "idle" means. You just override the OnIdle handler in your app class, e.g. CMyApp::OnIdle(), and call SoWin::doIdleTasks in there. If you look at the simple MFC examples (.../src/Inventor/examples/mfc/...) you'll see this. Of course if you used the IVF AppWizard we provide to create your MFC (+IVF) program, then you don't need to do anything. The AppWizard automatically overrides OnIdle and inserts the call to doIdleTasks.


How can I turn off viewing mode in my application so that the user cannot move or spin the scene?

If you do not want the user to ever be able to interact with the camera, then you should use a Render Area instead of a viewer, and add in the features you need, such as selection.

Another approach is to subclass the viewer and override the behaviors you don't want, or want to change. An example of overriding a viewer is supplied with the Open Inventor installation:

src\Inventor\examples\Techniques\ViewerButtons


When I click the seek button on the viewer to set a new center of rotation for the camera I want to store the center of rotation position. Where is it stored? Is there an easy way to get this information?

The center of rotation (aka focal point) is not explicitly saved. It is calculated when needed by adding the camera "focalDistance" to the camera "position" in the direction the camera is currently pointing ("orientation").

Given a pointer to the camera node, the math looks like this:

SbMatrix mx;
mx = camera->orientation.getValue();
// The camera looks down its local -Z axis; the third row of the
// rotation matrix gives that axis in world coordinates.
SbVec3f forward(-mx[2][0], -mx[2][1], -mx[2][2]);
// Focal point = position + focalDistance units along the view direction
SbVec3f pt = camera->position.getValue() + forward * camera->focalDistance.getValue();