
Bits & Bytes


Using C# Delegates to Call Static and Instance Functions

A delegate allows a programmer to abstract functions as variable values in the same way that other data, such as integers and doubles, are abstracted as variable values. Simply put, a delegate is a variable type that defines a specific type of function. An instance of the delegate can hold a reference to any function of that type.

Basic Steps for Using Delegates

  1. Declare a delegate type
  2. Create an instance of the type
  3. Assign that instance to a function
  4. Call the function via the delegate

The essential steps for using a delegate are listed above and demonstrated in the program below. The program consists of two files, Program.cs and CMyDelegateTester.cs; the additional class file is only needed for the second example.

For the first example, we can lay out the steps very easily. First, we have the delegate declaration just above the Main() function, which designates our delegate type, DDoSomething. Next, we have the instantiation, pfnFunction, of the delegate type right after the first comment inside the Main() function. This instance is assigned to the static Square() function on the same line. Finally, we call the Square() function via the delegate on the next line with the value 3.0 and output the result.

In the second example, we do the same thing with an instance function of the CMyDelegateTester class. Notice that we need to instantiate the class and that we assign the function, along with its object, to the delegate. This is important because the delegate is attached to and will depend on the object in this case.
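As a quick aside (a minimal standalone sketch, separate from the program below), the object that an instance delegate depends on can be inspected through the Target property that every delegate inherits from System.Delegate:

```csharp
using System;

namespace UsingDelegates {
    public class CMyDelegateTester {
        public double MultiplyByTen(double dX) {
            return 10.0 * dX;
        }
    }

    class TargetDemo {
        delegate double DDoSomething(double dX);

        static void Main(string[] args) {
            CMyDelegateTester qTesterObject = new CMyDelegateTester();
            // Binding an instance function captures the object as well
            DDoSomething mpfnMemberFunction = qTesterObject.MultiplyByTen;

            // The delegate holds a reference to the object it was bound to
            Console.WriteLine(ReferenceEquals(mpfnMemberFunction.Target, qTesterObject));
            Console.WriteLine(mpfnMemberFunction(3.0));
        }
    }
}
```

Because the delegate references the object, the object stays alive for as long as the delegate does.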

Executing the program, we see the output for each function call:

[Image: UsingDelegates, the program output]

Program.cs

using System;

namespace UsingDelegates {
    class Program {

        delegate double DDoSomething(double dX);

        static void Main(string[] args) {

            // 1. An example using a static function
            DDoSomething pfnFunction = Square;
            Console.WriteLine("The static function returned " + pfnFunction(3.0));

            // 2. An example using a member function
            CMyDelegateTester qTesterObject = new CMyDelegateTester();
            DDoSomething mpfnMemberFunction = qTesterObject.MultiplyByTen;
            Console.WriteLine("The member function returned " + mpfnMemberFunction(3.0));
        }

        static double Square(double dX) {
            return dX * dX;
        }
    }
}

CMyDelegateTester.cs

namespace UsingDelegates {
    public class CMyDelegateTester {

        public CMyDelegateTester() {
        }

        public double MultiplyByTen(double dX) {
            return 10.0 * dX;
        }
    }
}

C# Delegates Versus C++ Function Pointers

A C# delegate is similar to a C++ function pointer. However, there are some subtle differences:

  1. C# delegates require the creation of a new data type. In fact, a delegate declaration is equivalent to a C++ typedef declaration of a function pointer type. While C++ does not require a new type definition to use function pointers, it is good practice.
  2. Like C++ function pointers, C# delegate types are determined by the arguments and the return value. However, C++ distinguishes between static and instance functions and does not allow them to be used interchangeably as C# does. This is demonstrated in the C# code above.
  3. A C# delegate with a return type of void may be multicast to call multiple functions with one invocation. This may seem odd, but it is the mechanism behind events and listeners, and it will be illustrated in a future C# post.
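As a minimal sketch of the third point (the names here are illustrative, not from the program above), a void-returning delegate instance can be combined with += so that a single invocation calls every attached function in order:

```csharp
using System;

namespace UsingDelegates {
    class MulticastDemo {
        // A delegate type with a void return, suitable for multicasting
        delegate void DReport(double dX);

        static void PrintSquare(double dX) {
            Console.WriteLine("Square: " + (dX * dX));
        }

        static void PrintDouble(double dX) {
            Console.WriteLine("Double: " + (2.0 * dX));
        }

        static void Main(string[] args) {
            // Combine two functions into one multicast delegate
            DReport pfnReport = PrintSquare;
            pfnReport += PrintDouble;

            // One call invokes both functions, in the order they were added
            pfnReport(3.0);
        }
    }
}
```

Functions can likewise be detached with -=; this add/remove pattern is exactly how event subscription works.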

Rendering Transparent 3D Surfaces in WPF with C#

The primary problems that arise when rendering semi-transparent 3D objects in Windows Presentation Foundation have to do with false z-buffer occlusions. Specifically, when a transparent surface or polygon is rendered, it sets the z-buffer depth values and blocks objects that are behind it from being rendered, even though they should show through the transparent layer.

In WPF with C#, the z-buffer is not accessible, so it cannot be disabled during transparent rendering. Instead, we must render the transparent objects last so that they are layered over the rest of the scene and the objects behind them show through.
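WPF renders the children of a model group in the order they were added, so the fix amounts to the classic painter's ordering. As a hedged, WPF-free sketch (the SceneItem type and its distances are invented for illustration), the rule is: opaque objects first, then transparent objects from farthest to nearest:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class PainterOrderDemo {
    // A hypothetical scene item: name, distance from the camera, and opacity flag
    class SceneItem {
        public string Name;
        public double CameraDistance;
        public bool IsTransparent;
        public SceneItem(string sName, double dDist, bool bTrans) {
            Name = sName; CameraDistance = dDist; IsTransparent = bTrans;
        }
    }

    static void Main() {
        List<SceneItem> qItems = new List<SceneItem> {
            new SceneItem("OuterFront", 1.0, true),
            new SceneItem("Inner", 1.5, false),
            new SceneItem("OuterBack", 2.0, false)
        };

        // Opaque objects first (false sorts before true), then transparent
        // objects sorted farthest-to-nearest so nearer layers draw on top
        IEnumerable<SceneItem> qDrawOrder = qItems
            .OrderBy(qItem => qItem.IsTransparent)
            .ThenByDescending(qItem => qItem.CameraDistance);

        foreach (SceneItem qItem in qDrawOrder) {
            Console.WriteLine(qItem.Name);
        }
    }
}
```

The resulting order, back faces, then the inner solid, then the transparent front, matches the order in which the tetrahedrons are added to the model group in the listing below.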

[Image: RotatingTransTetra, the spinning transparent tetrahedron]

Below is the single code file that I used to generate the spinning, transparent tetrahedron shown above. The C# project is a simple Console Application project with references to PresentationCore, PresentationFramework, and WindowsBase added to it, as I showed in a prior post: Using WPF in a C# Console Application. The Main() function creates the Window for the program and calls TransparentScene() to do all of the rendering.

Inside the function TransparentScene(), I create the camera, the light, the animated rotation transformation, the tetrahedron geometry, and then use that geometry to specify three tetrahedrons. The first tetrahedron is called the Inner Tetrahedron because it is scaled to fit inside the others. The second tetrahedron is called the Outer Tetrahedron and is semi-transparent. The third tetrahedron is also part of the Outer Tetrahedron, but consists of the opaque back faces. Note that it only makes sense to render the back faces because the front faces are semi-transparent. Otherwise, the back would not be visible.
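The opposite orientation for the back faces is produced in the listing by re-indexing each triangle. As a standalone check (separate from the WPF program), the index formula can be printed by itself to confirm that it swaps the second and third vertex of each triangle, reversing the winding order:

```csharp
using System;

class WindingDemo {
    static void Main() {
        // The back-face index remap from the listing: 3*(i/3) + (2*(i%3))%3
        // maps each triangle (a, b, c) to (a, c, b), reversing its winding
        for (int i = 0; i < 12; ++i) {
            Console.Write(3 * (i / 3) + (2 * (i % 3) % 3));
            Console.Write((i % 3 == 2) ? "\n" : " ");
        }
    }
}
```

Reversed winding matters because WPF treats counterclockwise triangles as front-facing, so flipping the order turns the same vertices into back faces.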

At the end of the code, I use the following lines to add the tetrahedrons to the scene:

            qModelGroup.Children.Add(qBackGeometry);
            qModelGroup.Children.Add(qInnerGeometry);
            qModelGroup.Children.Add(qOuterGeometry);

Notice that the transparent “Outer Geometry” layer is added last. This is necessary to avoid false occlusions.

For comparison, I have included the image below with four different arrangements. The first (top-left) shows the scene with the transparent outer layer added before the inner and after the back. The second (top-right) shows the transparent outer layer added before both the inner and the back layers. The third (bottom-left) shows the transparent layer added before the back and after the inner layer. The last (bottom-right) shows the scene with the transparent layer added last as it is in the code.

[Image: TransparentComparison, the four layer orderings]

Program.cs

using System;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Media;
using System.Windows.Media.Media3D;
using System.Windows.Media.Animation;

namespace WpfTransparent {
    class Program {
        [STAThread]
        static void Main(string[] args) {
            Window qWindow = new Window();
            qWindow.Title = "Transparent Rendering";
            qWindow.Width = 400;
            qWindow.Height = 300;
            qWindow.Content = TransparentScene();
            qWindow.ShowDialog();
        }

        static Viewport3D TransparentScene() {
            // Define the camera
            PerspectiveCamera qCamera = new PerspectiveCamera();
            qCamera.Position = new Point3D(0, .25, 2.25);
            qCamera.LookDirection = new Vector3D(0, -.05, -1);
            qCamera.UpDirection = new Vector3D(0, 1, 0);
            qCamera.FieldOfView = 60;

            // Define a lighting model
            DirectionalLight qLight = new DirectionalLight();
            qLight.Color = Colors.White;
            qLight.Direction = new Vector3D(-0.5, -0.25, -0.5);

            // Define the animated rotation transformation
            RotateTransform3D qRotation =
                new RotateTransform3D(new AxisAngleRotation3D(new Vector3D(0, 1, 0), 1));
            DoubleAnimation qAnimation = new DoubleAnimation();
            qAnimation.From = 1;
            qAnimation.To = 361;
            qAnimation.Duration = new Duration(TimeSpan.FromMilliseconds(5000));
            qAnimation.RepeatBehavior = RepeatBehavior.Forever;
            qRotation.Rotation.BeginAnimation(AxisAngleRotation3D.AngleProperty, qAnimation);

            // Define the geometry
            const double kdSqrt2 = 1.4142135623730950488016887242097;
            const double kdSqrt6 = 2.4494897427831780981972840747059;
            // Create a collection of vertex positions
            Point3D[] qaV = new Point3D[4]{
                new Point3D(0.0, 1.0, 0.0),
                new Point3D(2.0 * kdSqrt2 / 3.0, -1.0 / 3.0, 0.0),
                new Point3D(-kdSqrt2 / 3.0, -1.0 / 3.0, -kdSqrt6 / 3.0),
                new Point3D(-kdSqrt2 / 3.0, -1.0 / 3.0, kdSqrt6 / 3.0)};
            Point3DCollection qPoints = new Point3DCollection();
            // Designate Vertices
            // My Scheme (0, 1, 2), (1, 0, 3), (2, 3, 0), (3, 2, 1)
            for (int i = 0; i < 12; ++i) {
                if ((i/3) % 2 == 0) {
                    qPoints.Add(qaV[i%4]);
                } else { 
                    qPoints.Add(qaV[(i*3)%4]);
                }
            }
            // Designate Triangles
            Int32Collection qTriangles = new Int32Collection();
            for (int i = 0; i < 12; ++i ) {
                qTriangles.Add(i);
            }
            Int32Collection qBackTriangles = new Int32Collection();
            // Designate Back Triangles in the opposite orientation
            for (int i = 0; i < 12; ++i) {
                qBackTriangles.Add(3 * (i / 3) + (2 * (i % 3) % 3));
            }

            // Inner Tetrahedron: Define the mesh, material and transformation.
            MeshGeometry3D qFrontMesh = new MeshGeometry3D();
            qFrontMesh.Positions = qPoints;
            qFrontMesh.TriangleIndices = qTriangles;
            GeometryModel3D qInnerGeometry = new GeometryModel3D();
            qInnerGeometry.Geometry = qFrontMesh;
            // *** Material ***
            DiffuseMaterial qDiffGreen =
                new DiffuseMaterial(new SolidColorBrush(Color.FromArgb(255, 0, 128, 0)));
            SpecularMaterial qSpecWhite = new
                SpecularMaterial(new SolidColorBrush(Color.FromArgb(255, 255, 255, 255)), 30.0);
            MaterialGroup qInnerMaterial = new MaterialGroup();
            qInnerMaterial.Children.Add(qDiffGreen);
            qInnerMaterial.Children.Add(qSpecWhite);
            qInnerGeometry.Material = qInnerMaterial;
            // *** Transformation ***
            ScaleTransform3D qScale = new ScaleTransform3D(new Vector3D(.5, .5, .5));
            Transform3DGroup myTransformGroup = new Transform3DGroup();
            myTransformGroup.Children.Add(qRotation);
            myTransformGroup.Children.Add(qScale);
            qInnerGeometry.Transform = myTransformGroup;

            // Outer Tetrahedron (semi-transparent) : Define the mesh, material and transformation.
            GeometryModel3D qOuterGeometry = new GeometryModel3D();
            qOuterGeometry.Geometry = qFrontMesh;
            // *** Material ***
            DiffuseMaterial qDiffTransYellow =
                new DiffuseMaterial(new SolidColorBrush(Color.FromArgb(64, 255, 255, 0)));
            SpecularMaterial qSpecTransWhite =
                new SpecularMaterial(new SolidColorBrush(Color.FromArgb(128, 255, 255, 255)), 30.0);
            MaterialGroup qOuterMaterial = new MaterialGroup();
            qOuterMaterial.Children.Add(qDiffTransYellow);
            qOuterMaterial.Children.Add(qSpecTransWhite);
            qOuterGeometry.Material = qOuterMaterial;
            // *** Transformation ***
            qOuterGeometry.Transform = qRotation;

            // Outer Tetrahedron (solid back) : Define the mesh, material and transformation.
            MeshGeometry3D qBackMesh = new MeshGeometry3D();
            qBackMesh.Positions = qPoints;
            qBackMesh.TriangleIndices = qBackTriangles;
            GeometryModel3D qBackGeometry = new GeometryModel3D();
            qBackGeometry.Geometry = qBackMesh;
            // *** Material ***
            DiffuseMaterial qDiffBrown =
                new DiffuseMaterial(new SolidColorBrush(Color.FromArgb(255, 200, 175, 0)));
            qBackGeometry.Material = qDiffBrown;
            // *** Transformation ***
            qBackGeometry.Transform = qRotation;

            // Collect the components
            Model3DGroup qModelGroup = new Model3DGroup();
            qModelGroup.Children.Add(qLight);
            qModelGroup.Children.Add(qBackGeometry);
            qModelGroup.Children.Add(qInnerGeometry);
            qModelGroup.Children.Add(qOuterGeometry);
            ModelVisual3D qVisual = new ModelVisual3D();
            qVisual.Content = qModelGroup;
            Viewport3D qViewport = new Viewport3D();
            qViewport.Children.Add(qVisual);
            qViewport.Camera = qCamera;

            return qViewport;
        }
    }
}

Automatic 3D Normal Vector Calculation in C# WPF Applications

The normal vectors that are used to calculate the light reflections for 3D graphics in Windows Presentation Foundation are based on how the mesh geometry is specified. Of course, the normals (normal vectors) can be set manually, but when they are calculated automatically for vertices, each normal is formed by averaging the normals of the adjacent triangles as specified by the geometry graph. To illustrate, I have a tetrahedron with two different graph specifications, but the same vertices.

The C# project for this blog post is the same as it was in my prior blog post, and the code is similar as well. In that post, I used normals that were calculated as the average of the three adjacent triangle faces, and the color was smoothly interpolated across each face. In this post, I have changed the camera angle slightly, but I have used the same normal calculation in the code below. However, I have also included code for flat shading as well as the code for smooth shading for comparison. You can copy and paste the flat shading code over the smooth shading code to try it out. A side-by-side comparison of the two is shown in the image below: left is smooth and right is flat.

[Image: Comparison, smooth shading (left) versus flat shading (right)]

The code snippets below consist of the two geometry specifications and our two code files: Program.cs and CScene3D.cs. Either geometry specification can be pasted into CScene3D.cs in place of the other to set the shading accordingly. In the flat shading model, each vertex is specified three times: once for each of the triangle faces that contains it. In the smooth model, each vertex is shared by three triangle faces; note the vertex indices for each triangle.

This explains how the normal calculations are made for each vertex: they are averaged over all of the triangles that contain the vertex. Since each vertex in the flat model is contained in exactly one triangle, its normal comes from that single face, just like the other vertices of that triangle. That is why the shading is constant over each face.
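To make the averaging concrete, here is a small standalone sketch (plain C# with no WPF types; the same math WPF applies when the Normals collection is left unset) that builds each face normal with a cross product and sums the normals of the three smooth-model faces that share the apex vertex:

```csharp
using System;

class NormalDemo {
    // Unnormalized face normal: cross product of (b - a) and (c - a)
    static double[] FaceNormal(double[] a, double[] b, double[] c) {
        double[] u = { b[0] - a[0], b[1] - a[1], b[2] - a[2] };
        double[] v = { c[0] - a[0], c[1] - a[1], c[2] - a[2] };
        return new double[] {
            u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]
        };
    }

    static void Main() {
        double kdSqrt2 = Math.Sqrt(2.0);
        double kdSqrt6 = Math.Sqrt(6.0);
        // The tetrahedron vertices from the smooth model
        double[][] qaV = {
            new double[] { 0.0, 1.0, 0.0 },                             // 0: apex
            new double[] { 2.0 * kdSqrt2 / 3.0, -1.0 / 3.0, 0.0 },      // 1
            new double[] { -kdSqrt2 / 3.0, -1.0 / 3.0, kdSqrt6 / 3.0 }, // 2
            new double[] { -kdSqrt2 / 3.0, -1.0 / 3.0, -kdSqrt6 / 3.0 } // 3
        };
        // The three smooth-model triangles that share the apex (vertex 0)
        int[][] qaFaces = { new[] { 0, 2, 1 }, new[] { 0, 1, 3 }, new[] { 0, 3, 2 } };

        // Sum (equivalently, average) the adjacent face normals
        double[] qSum = { 0.0, 0.0, 0.0 };
        foreach (int[] qFace in qaFaces) {
            double[] qN = FaceNormal(qaV[qFace[0]], qaV[qFace[1]], qaV[qFace[2]]);
            for (int i = 0; i < 3; ++i) { qSum[i] += qN[i]; }
        }

        // By symmetry the x and z parts cancel: the apex normal points along +Y
        double dLen = Math.Sqrt(qSum[0] * qSum[0] + qSum[1] * qSum[1] + qSum[2] * qSum[2]);
        bool bUp = Math.Abs(qSum[0]) < 1e-9 * dLen
                && Math.Abs(qSum[2]) < 1e-9 * dLen
                && qSum[1] > 0.0;
        Console.WriteLine("Averaged apex normal points along +Y: " + bUp);
    }
}
```

In the flat model, by contrast, the apex appears three times in the vertex list, and each copy keeps the normal of its single face, which is what produces the faceted look.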

To understand why either of these models might be used, consider the purpose of the rendering. For a faceted surface, like a diamond, we would want to use the flat shading model to correctly illustrate the facets. For a smooth surface, like the Earth, we would want to use the smooth model to hide the polygons and make the surface appear smoother. In this program, we would probably want to use flat shading because the model is a tetrahedron. A tetrahedron is faceted, and its corners are so sharp that they can never appear smooth, so smooth shading always looks strange for it.

Flat Model

            // Create a collection of vertex positions
            Point3DCollection qPoints = new Point3DCollection();
            // Triangle 1
            qPoints.Add(new Point3D(0.0, 1.0, 0.0));
            qPoints.Add(new Point3D(-kdSqrt2 / 3.0, -1.0 / 3.0, kdSqrt6 / 3.0));
            qPoints.Add(new Point3D(2.0 * kdSqrt2 / 3.0, -1.0 / 3.0, 0.0));
            // Triangle 2
            qPoints.Add(new Point3D(0.0, 1.0, 0.0));
            qPoints.Add(new Point3D(2.0 * kdSqrt2 / 3.0, -1.0 / 3.0, 0.0));
            qPoints.Add(new Point3D(-kdSqrt2 / 3.0, -1.0 / 3.0, -kdSqrt6 / 3.0));
            // Triangle 3
            qPoints.Add(new Point3D(0.0, 1.0, 0.0));
            qPoints.Add(new Point3D(-kdSqrt2 / 3.0, -1.0 / 3.0, -kdSqrt6 / 3.0));
            qPoints.Add(new Point3D(-kdSqrt2 / 3.0, -1.0 / 3.0, kdSqrt6 / 3.0));
            // Triangle 4
            qPoints.Add(new Point3D(2.0 * kdSqrt2 / 3.0, -1.0 / 3.0, 0.0));
            qPoints.Add(new Point3D(-kdSqrt2 / 3.0, -1.0 / 3.0, kdSqrt6 / 3.0));
            qPoints.Add(new Point3D(-kdSqrt2 / 3.0, -1.0 / 3.0, -kdSqrt6 / 3.0));
            // Designate Triangles
            Int32Collection qTriangles = new Int32Collection();
            // Triangle 1
            qTriangles.Add(0);
            qTriangles.Add(1);
            qTriangles.Add(2);
            // Triangle 2
            qTriangles.Add(3);
            qTriangles.Add(4);
            qTriangles.Add(5);
            // Triangle 3
            qTriangles.Add(6);
            qTriangles.Add(7);
            qTriangles.Add(8);
            // Triangle 4
            qTriangles.Add(9);
            qTriangles.Add(10);
            qTriangles.Add(11);

Smooth Model

            // Create a collection of vertex positions
            Point3DCollection qPoints = new Point3DCollection();
            qPoints.Add(new Point3D(0.0, 1.0, 0.0));
            qPoints.Add(new Point3D(2.0 * kdSqrt2 / 3.0, -1.0 / 3.0, 0.0));
            qPoints.Add(new Point3D(-kdSqrt2 / 3.0, -1.0 / 3.0, kdSqrt6 / 3.0));
            qPoints.Add(new Point3D(-kdSqrt2 / 3.0, -1.0 / 3.0, -kdSqrt6 / 3.0));
            // Designate Triangles
            Int32Collection qTriangles = new Int32Collection();
            // Triangle 1
            qTriangles.Add(0);
            qTriangles.Add(2);
            qTriangles.Add(1);
            // Triangle 2
            qTriangles.Add(0);
            qTriangles.Add(1);
            qTriangles.Add(3);
            // Triangle 3
            qTriangles.Add(0);
            qTriangles.Add(3);
            qTriangles.Add(2);
            // Triangle 4
            qTriangles.Add(1);
            qTriangles.Add(2);
            qTriangles.Add(3);

Program.cs

using System;
using System.Windows;

namespace ConsoleApplication {
    class Program {
        [STAThread]
        static void Main(string[] args) {
            Window qWindow = new Window();
            qWindow.Title = "WPF in Console";
            qWindow.Width = 400;
            qWindow.Height = 300;
            qWindow.Content = CScene3D.Test();
            qWindow.ShowDialog();
        }
    }
}

CScene3D.cs

using System;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Media;
using System.Windows.Media.Media3D;
using System.Windows.Media.Animation;

namespace ConsoleApplication {
    class CScene3D {
        // Animation - Tetrahedron (upright, looking slightly up from below)
        public static Viewport3D Test() {
            // Define the camera
            PerspectiveCamera qCamera = new PerspectiveCamera();
            qCamera.Position = new Point3D(0, -.5, 2);
            qCamera.LookDirection = new Vector3D(0, .3, -1);
            qCamera.UpDirection = new Vector3D(0, 1, 0);
            qCamera.FieldOfView = 60;

            // Define a lighting model
            DirectionalLight qLight = new DirectionalLight();

            // Define the geometry
            const double kdSqrt2 = 1.4142135623730950488016887242097;
            const double kdSqrt6 = 2.4494897427831780981972840747059;
            // Create a collection of vertex positions
            Point3DCollection qPoints = new Point3DCollection();
            qPoints.Add(new Point3D(0.0, 1.0, 0.0));
            qPoints.Add(new Point3D(2.0 * kdSqrt2 / 3.0, -1.0 / 3.0, 0.0));
            qPoints.Add(new Point3D(-kdSqrt2 / 3.0, -1.0 / 3.0, kdSqrt6 / 3.0));
            qPoints.Add(new Point3D(-kdSqrt2 / 3.0, -1.0 / 3.0, -kdSqrt6 / 3.0));
            // Designate Triangles
            Int32Collection qTriangles = new Int32Collection();
            // Triangle 1
            qTriangles.Add(0);
            qTriangles.Add(2);
            qTriangles.Add(1);
            // Triangle 2
            qTriangles.Add(0);
            qTriangles.Add(1);
            qTriangles.Add(3);
            // Triangle 3
            qTriangles.Add(0);
            qTriangles.Add(3);
            qTriangles.Add(2);
            // Triangle 4
            qTriangles.Add(1);
            qTriangles.Add(2);
            qTriangles.Add(3);

            MeshGeometry3D qMesh = new MeshGeometry3D();
            qMesh.Positions = qPoints;
            qMesh.TriangleIndices = qTriangles;
            // Apply the mesh to the geometry model.
            GeometryModel3D myGeometryModel = new GeometryModel3D();
            myGeometryModel.Geometry = qMesh;

            // Define the material for the geometry
            SolidColorBrush qBrush = new SolidColorBrush(Color.FromArgb(255, 0, 255, 0));
            DiffuseMaterial qMaterial = new DiffuseMaterial(qBrush);
            myGeometryModel.Material = qMaterial;

            // Define the transformation, if any. In this case, we use an animated transformation
            RotateTransform3D qRotation =
                new RotateTransform3D(new AxisAngleRotation3D(new Vector3D(0, 1, 0), 1));
            DoubleAnimation qAnimation = new DoubleAnimation();
            qAnimation.From = 1;
            qAnimation.To = 361;
            qAnimation.Duration = new Duration(TimeSpan.FromMilliseconds(5000));
            qAnimation.RepeatBehavior = RepeatBehavior.Forever;
            qRotation.Rotation.BeginAnimation(AxisAngleRotation3D.AngleProperty, qAnimation);
            myGeometryModel.Transform = qRotation;

            // Collect the components
            Model3DGroup qModelGroup = new Model3DGroup();
            qModelGroup.Children.Add(qLight);
            qModelGroup.Children.Add(myGeometryModel);
            ModelVisual3D qVisual = new ModelVisual3D();
            qVisual.Content = qModelGroup;
            Viewport3D qViewport = new Viewport3D();
            qViewport.Children.Add(qVisual);
            qViewport.Camera = qCamera;

            return qViewport;
        }
    }
}