Sometimes it is convenient to get a rough estimate of the 3D position of whatever is under the mouse cursor.
This information can be obtained by reading the depth value at the mouse's screen location.
The method Workspace.ScreenToWorld() does exactly that.
The following code sample demonstrates how you could use this approach to display an arrow that always points along the surface normal under the mouse cursor:
using System.Drawing;
using System.Linq;
using System.Windows.Forms;
using devDept.Eyeshot;
using devDept.Eyeshot.Entities;
using devDept.Geometry;

public class MyDesign : Design
{
    // the entity we'll use to represent the surface normal direction
    private Entity _hoverMesh;
    private Transformation _hoverMeshTransfInv = new Transformation(1.0); // identity

    private Entity CreateHoverMesh(double size)
    {
        Mesh m = Mesh.CreateArrow(
            0.1 * size, 0.6 * size,
            0.2 * size, 0.4 * size,
            16, Mesh.natureType.Smooth);

        // the arrow should point towards positive Z
        m.TransformBy(new Rotation(UtilityEx.PI_2, Vector3D.AxisMinusY));
        m.Color = Color.FromArgb(160, 200, 0, 180);
        return m;
    }

    protected override void OnMouseMove(MouseEventArgs e)
    {
        base.OnMouseMove(e);

        // create the mesh that represents the surface normal, once
        if (_hoverMesh == null)
        {
            _hoverMesh = CreateHoverMesh(Entities.BoxSize.Diagonal * 0.05);
            _hoverMesh.Regen(0);
        }

        // the arrow is drawn as a temp entity: since TempEntities are drawn as an
        // overlay, it's not necessary to re-draw the scene every time the cone
        // position is updated.
        if (!TempEntities.Contains(_hoverMesh)) TempEntities.Add(_hoverMesh);

        // *** Normal reconstruction from depth ***
        // ----------------------------------------
        // ScreenToWorld reconstructs an approximated world normal from the 3D
        // scene depth (so, an approximation of another approximation).
        //
        //   o
        //   |        X -> mouse center
        //   X---o    o -> offset sample points
        //
        // normal = cross(ptOffsetX - center, ptOffsetY - center)
        int offset = 5; // in pixels

        Point[] pts = new[]
        {
            e.Location,
            e.Location + new Size(offset, 0),
            e.Location + new Size(0, -offset) // screen Y axis points down
        };

        var mousePosWorld = ScreenToWorld(pts);
        if (mousePosWorld.Any(p => p == null)) return; // empty pixels

        Transformation translTr = new Translation(mousePosWorld[0].AsVector); // [0] is the mouse position
        Vector3D vx = (mousePosWorld[1] - mousePosWorld[0]).AsVector;
        Vector3D vy = (mousePosWorld[2] - mousePosWorld[0]).AsVector;
        vx.Normalize();
        vy.Normalize();

        Vector3D normal = Vector3D.Cross(vx, vy);
        normal.Normalize();
        // ----------------------------------------

        // create a transformation so that the arrow (pointing towards positive Z
        // in object space) points towards "normal" in world space.
        Align3D orientationTr = new Align3D(Plane.XY, new Plane(Point3D.Origin, normal));

        // in sequence: reset transformation -> apply orientation -> apply translation
        var newTr = translTr * orientationTr;
        _hoverMesh.TransformBy(newTr * _hoverMeshTransfInv);
        _hoverMesh.Regen(0);

        // draw the temp entity
        PaintBackBuffer();
        SwapBuffers();

        // store the updated transformation (so that it can be undone on the next move)
        _hoverMeshTransfInv = (Transformation)newTr.Clone();
        _hoverMeshTransfInv.Invert();
    }
}
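Stripped of the Eyeshot-specific parts, the normal reconstruction above boils down to a single cross product between the two offset vectors. A minimal standalone sketch (the `Vec3` struct here is a hypothetical stand-in for `devDept.Geometry.Vector3D`, used only to keep the example self-contained):

```csharp
using System;

struct Vec3
{
    public double X, Y, Z;
    public Vec3(double x, double y, double z) { X = x; Y = y; Z = z; }
    public static Vec3 operator -(Vec3 a, Vec3 b) => new Vec3(a.X - b.X, a.Y - b.Y, a.Z - b.Z);
    public static Vec3 Cross(Vec3 a, Vec3 b) =>
        new Vec3(a.Y * b.Z - a.Z * b.Y, a.Z * b.X - a.X * b.Z, a.X * b.Y - a.Y * b.X);
}

class NormalDemo
{
    static void Main()
    {
        // three world points sampled under the cursor: center, +X offset, +Y offset
        var center = new Vec3(0, 0, 0);
        var offX = new Vec3(1, 0, 0);
        var offY = new Vec3(0, 1, 0);

        // normal = cross(ptOffsetX - center, ptOffsetY - center)
        var normal = Vec3.Cross(offX - center, offY - center); // (0, 0, 1) for this flat patch
        Console.WriteLine($"({normal.X}, {normal.Y}, {normal.Z})");
    }
}
```

Because the two offsets span the surface patch under the cursor, their cross product points away from it; swapping the argument order would flip the normal.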
Notes
- Normal reconstruction from depth is affected by whatever gets written to the depth buffer: be aware of edges (they do write to depth) and transparent geometry (which is not written to depth unless Workspace.WriteDepthForTransparents is true).
- This method is fast but inherently inaccurate: depth values are stored with limited precision (typically 24 bits), and that precision is not uniform across the camera's z range due to perspective projection.
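If transparent surfaces should contribute to the depth read-back, the flag mentioned above can be enabled up front. A sketch, assuming `design1` is your Workspace-derived control instance:

```csharp
// allow transparent geometry to write to the depth buffer,
// so ScreenToWorld also returns positions on transparent surfaces
design1.WriteDepthForTransparents = true;
```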
With perspective cameras, depth precision also depends on the near and far plane parameters (see how the near plane distance affects perspective projection).
If you encounter precision issues in some scenes, try setting Camera.NearPlaneDistanceFactor to a higher value, at the cost of more aggressive near-plane clipping.
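The non-uniform precision can be made concrete with a small calculation. The sketch below assumes the standard OpenGL-style perspective depth mapping d = (f/(f-n))·(1 - n/z) quantized to 24 bits (Eyeshot's internal mapping may differ in detail); it shows the world-space error of one depth increment growing roughly with z²:

```csharp
using System;

class DepthPrecisionDemo
{
    // window-space depth for eye-space distance z (standard perspective mapping, assumed)
    static double DepthAt(double z, double n, double f) => (f / (f - n)) * (1.0 - n / z);

    // inverse mapping: eye-space distance for a window-space depth value
    static double EyeZAt(double d, double n, double f) => f * n / (f - d * (f - n));

    static void Main()
    {
        double n = 0.1, f = 1000.0;           // near/far plane distances
        double step = 1.0 / ((1 << 24) - 1);  // one 24-bit depth increment

        foreach (double z in new[] { 1.0, 10.0, 100.0, 900.0 })
        {
            double d = DepthAt(z, n, f);
            // world-space error caused by a single quantization step at this depth;
            // it grows roughly with z^2, so far-away picks are much less accurate
            double err = EyeZAt(Math.Min(d + step, 1.0), n, f) - z;
            Console.WriteLine($"z = {z,6}: one-step depth error ~ {err:E2}");
        }
    }
}
```

Raising the near plane distance shrinks the (f/n) ratio, which is why Camera.NearPlaneDistanceFactor helps: it spreads the available depth precision over a smaller range.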