Microsoft Build 2020 Live Stream

The Microsoft Build 2020 live stream starts at about 08:00 Pacific time on 19 May 2020. This year Build will be a free online event. If you haven’t registered already, it isn’t too late. Microsoft will have sessions going around the clock for 48 continuous hours. Even though the event is online, some sessions do have a capacity limit.  Go register and choose your sessions at https://mybuild.microsoft.com/. The link for the live stream is below.

 

ASP.NET Core: Pushing Data over a Live Connection

Today I am creating an application in ASP.NET Core 3.1.  The application needs to continually push updated information to any clients that are actively connected.  There are a few possible ways to do this.  I have decided to use PushStreamContent.

With this class I can have an open HTTP connection over which data can be arbitrarily pushed.  The stream itself is raw.  I could push out text, binary data, or anything else that I wish to serialize.  This means that any parsing and interpretation of data is my responsibility, but that is totally fine for this purpose.  But how do I use PushStreamContent to accomplish this?

I will start by creating a new ASP.NET Core project.

ASP.netCoreNewProject

For the ASP.NET Core Project type there are several paths to success.  The one that I am taking is not the only “correct” one.  I have chosen the template for “Web Application (Model-View-Controller)”.  A default web application is created and a basic HTML UI is put in place.

ASP.NETCoreProjectType

Now that the project is created, there are a few configuration items that we need to handle.  Startup.cs requires some updates since we are going to have WebAPI classes in this project.

ASP.NETCoreMVCProjectFiles

Within Startup.cs there is a method named ConfigureServices.

public void ConfigureServices(IServiceCollection services)
{
    services.AddControllersWithViews();
}

In this method an additional line is needed to support WebApiConventions.

public void ConfigureServices(IServiceCollection services)
{
    services.AddControllersWithViews();
    services.AddMvc().AddWebApiConventions();
}

For the sake of development the CORS headers for the application also need some updating.  Further down in Startup.cs is a method named Configure.

app.UseCors(builder =>  builder.AllowAnyHeader().AllowAnyOrigin().AllowAnyMethod());

If you are using Visual Studio as an editor you might see that AddWebApiConventions() has a red underline and an error about the method being undefined.  To resolve this there is a NuGet package to add to the project.  Right-click on Dependencies, select “Manage NuGet Packages.”  Click on the “Browse” tab and search for a package named Microsoft.AspNetCore.Mvc.WebApiCompatShim.

Microsoft.AspNetCore.Mvc.WebApiCompatShim

After that is added the basic configuration is complete.  Now to add code.

Right-click on the Controllers folder and select “Add New”, “Controller.”  For the controller type select “API Controller – Empty.”  For the controller name I have selected “StreamUpdateController.”  There are three members that will need to be on the controller.

  • Get(HttpRequestMessage) – the way that a client will subscribe to the updates.  They come through an HTTP Get.
  • OnStreamAvailable(Stream, HttpContent, TransportContext) – a static method that is called when the response stream is ready to receive data.  Here it only needs to add a client to the collection of subscribers.
  • Clients – a collection of the currently attached clients subscribing to the message.

For the collection of clients an auto-generated property is sufficient.  A single line of code takes care of its implementation.

private static ConcurrentBag<StreamWriter> Clients { get; } = new ConcurrentBag<StreamWriter>();

The Get() method adds the new connection to the collection of connections and sets the response code and mime type.  The mime type used here is text/event-stream.

public HttpResponseMessage Get(HttpRequestMessage request)
{
    const String RESPONSE_TYPE = "text/event-stream";
    var response = new HttpResponseMessage(HttpStatusCode.Accepted)
    {
        Content = new PushStreamContent((a, b, c) =>
        { OnStreamAvailable(a, b, c); }, RESPONSE_TYPE)
    };
    response.Content.Headers.ContentType = new System.Net.Http.Headers.MediaTypeHeaderValue(RESPONSE_TYPE);
    return response;
}

The static method, OnStreamAvailable, only has a couple of lines in which it adds the stream to the collection with a StreamWriter wrapped around it.

private static void OnStreamAvailable(Stream stream, HttpContent content,
    TransportContext context)
{
    var client = new StreamWriter(stream);
    Clients.Add(client);
}

As of yet nothing is being written to the streams.  To keep this sample simple all clients will receive the same data on their streams.  Everything is in place to start writing data out to clients.  I will use a static timer to decide when to write something to the stream.  Every time its period elapses, all clients connected to the stream will receive a message.

private static System.Timers.Timer writeTimer = new System.Timers.Timer() { Interval = 1000, AutoReset = true };

static StreamUpdateController()
{
    writeTimer.Elapsed += TimerElapsed;
    writeTimer.Start();
}

The handler for the timer’s elapsed event will get the current date and print it to all client streams.

static async void TimerElapsed(object sender, ElapsedEventArgs args)
{
    var dataToWrite = DateTime.Now.ToString();
    foreach(var clientStream in Clients)
    {
        await clientStream.WriteLineAsync(dataToWrite);
    }
}

You can now test the stream out.  If you run the project and navigate to https://localhost:yourPort/api/StreamUpdate you can see the data being written…

…well, kind of.  The browser window is blank.  Something clearly has gone wrong, because there is no data here.  Maybe there was some type of error and we are not getting an error message.  Or perhaps the problem was caused by… wait a second, there is information on the screen.  What took so long?

The “problem” here is that writes are buffered.  This normally is not an issue and can actually improve performance.  But here the buffer needs to be transmitted as soon as a complete message is within it.  To resolve this the stream can be told to flush its buffer; Flush() and FlushAsync() will do this.  The updated TimerElapsed method now looks like the following.

static async void TimerElapsed(object sender, ElapsedEventArgs args)
{
    var dataToWrite = DateTime.Now.ToString();
    foreach(var clientStream in Clients)
    {
        await clientStream.WriteLineAsync(dataToWrite);
        await clientStream.FlushAsync();
    }
}

Run the program again and open the URL to now see the messages rendered to the screen every second.  But how does a client receive this?

While it is easy for .NET programs to interact with this, I wanted to consume the data from HTML/JavaScript.  Provided that you use a certain data format, there is an object named EventSource that makes interacting with the output from this stream easy.  I have not used that format yet.  The option available to me is XMLHttpRequest.  If you have used XMLHttpRequest before, then you know that after a request is complete an event is raised making the completed data available to you.  That does not help for getting the chunks of data as they arrive.  The object has another event that does help, onprogress.

When onprogress fires, the responseText member holds the accumulated data and its length field indicates how much has been received so far.  Every time the progress event is raised, the characters appended to the string since the previous event can be examined to grab the next chunk of data.

function xmlListen() {
    var xhr = new XMLHttpRequest();
    var last_index = 0;
    xhr.open("GET", "/api/StreamUpdate/", true);
    //xhr.setRequestHeader("Content-Type", "text/event-stream");
    xhr.onload = function (e) {
        console.log(`readystate ${xhr.readyState}`);
        if (xhr.readyState === 4) {
            if (xhr.status >= 200 && xhr.status <= 299) {
                console.log(xhr.responseText);
            } else {
                console.error(xhr.statusText);
            }
        }
    };
    xhr.onprogress = (p) => {
        var curr_index = xhr.responseText.length;
        if (last_index == curr_index) return;
        var s = xhr.responseText.substring(last_index, curr_index);
        last_index = curr_index;
        console.log("PROGRESS:", s);
    }
    xhr.onerror = function (e) {
        console.error(xhr.statusText);
    };
    xhr.send(null);
}

This works.  But I do have some questions about this implementation.  The XMLHttpRequest‘s responseText member appears to just keep growing without end.  For smaller data sizes this might not be a big deal, but since I work on some HTML applications that may run for over 24 hours it could lead to unnecessary memory pressure.  That is not desirable.  Let us go back to EventSource.

The data being written is not in the right format for EventSource.  To make the adjustment, each message must be preceded by the string “data: ” and followed by two newline characters.  That is it.  A downside of conforming to something compatible with EventSource is that all data must be expressible as text.

static async void TimerElapsed(object sender, ElapsedEventArgs args)
{
    var dataToWrite = $"data: {DateTime.Now.ToString()}\n\n";
    foreach(var clientStream in Clients)
    {
        await clientStream.WriteLineAsync(dataToWrite);
        await clientStream.FlushAsync();
    }
}

On the client side the following will create an EventSource that receives those messages.

function startEventSource() {
    var source = new EventSource('/api/StreamUpdate/');
    source.onmessage = (message) => {
        console.log(message.id, message.data);
    }
    source.onerror = (e) => {
        console.error(e);
    }
    source.onopen = () => {
        console.log('opened');
    }
}

I would still like to be able to stream less constrained data through the same service.  To do this, instead of keeping a collection of StreamWriter objects I have made a class that holds the stream along with an attribute indicating the format in which data should be sent.  A client can specify a format through a query parameter.

enum StreamFormat
{
    Text,
    Binary
}
class Client
{
    public Client(Stream s, StreamFormat f)
    {
        this.Stream = s;
        this.Writer = new StreamWriter(s);
        this.Format = f;
    }

    public Stream Stream { get;  }
    public StreamWriter Writer { get; }
    public StreamFormat Format { get; }

}

public HttpResponseMessage Get(HttpRequestMessage request)
{
    var format = request.RequestUri.ParseQueryString()["format"] ?? "text";
    const String RESPONSE_TYPE = "text/event-stream";
    HttpResponseMessage response;
    if (format.Equals("binary"))
    {
        response = new HttpResponseMessage(HttpStatusCode.Accepted)
        {
            Content = new PushStreamContent((a, b, c) =>
            { OnBinaryStreamAvailable(a, b, c); }, RESPONSE_TYPE)
        };
    }
    else
    {
        response = new HttpResponseMessage(HttpStatusCode.Accepted)
        {
            Content = new PushStreamContent((a, b, c) =>
            { OnStreamAvailable(a, b, c); }, RESPONSE_TYPE)
        };
    }           
    return response;
}

static void OnStreamAvailable(Stream stream, HttpContent content, TransportContext context)
{                        
    Clients.Add(new Client(stream, StreamFormat.Text));
}

static void OnBinaryStreamAvailable(Stream stream, HttpContent content, TransportContext context)
{
    Clients.Add(new Client(stream, StreamFormat.Binary));
}

static async void TimerElapsed(object sender, ElapsedEventArgs args)
{
    var data = new byte[] { 0x01, 0x02, 0x03, 0x04, 0x10, 0x40 };

    // Clients is now assumed to be a List<Client> so that disconnected clients can be removed.
    List<Client> unsubscribeList = new List<Client>();
    foreach (var client in Clients)
    {
        try
        {
            if (client.Format == StreamFormat.Binary)
                await client.Stream.WriteAsync(data, 0, data.Length);
            else
                await client.Writer.WriteLineAsync($"data: {ByteArrayToString(data)}\n\n");
            await client.Writer.FlushAsync();
        }
        catch (Exception)
        {
            // A failed write means the client has disconnected; mark it for removal.
            unsubscribeList.Add(client);
        }
    }
    Clients.RemoveAll((i) => unsubscribeList.Contains(i));
}

Introduction to Direct2D: DirectWrite

In my previous installment of this series I was rebuilding a video game interface.  The interface resembled my target, but did not have any text.  How can I render text with Direct2D?  The answer is: DirectWrite.

What is DirectWrite?

DirectWrite is a GPU accelerated text rendering API that works hand in hand with Direct2D.  It originally shipped with Windows 7 and (as of the publishing date of this article) continues to receive updates in Windows 10.  It is a great solution for rendering text within a DXGI based program.  Render surfaces are generally dependent on the device, but DirectWrite resources are device independent; there is no need to recreate them if the surface is lost.

Text Format

There is no default text style.  When text is rendered it must have a definite font, style, size, and weight.  All of these attributes are packaged in an IDWriteTextFormat.  If you have worked with CSS, think of IDWriteTextFormat as a style for text.  Through this interface you select the font face, weight, size, and text alignment.

TOF(_pWriteFactory->CreateTextFormat(
	TEXT("Segoe UI"),	//font family
	NULL,				//font collection
	DWRITE_FONT_WEIGHT_EXTRA_BOLD,
	DWRITE_FONT_STYLE_NORMAL,
	DWRITE_FONT_STRETCH_NORMAL,
	40.0f,
	TEXT("en-us"),
	&_pDefaultFont
));

Drawing Text

I will discuss two ways to render a text string: DrawText and DrawTextLayout.  The easier method of rendering a text string is to use ID2D1RenderTarget::DrawText.  This method accepts a text format object and a bounding rectangle.  It renders the text string at the location specified by the bounding rectangle.  It also accepts optional arguments that affect text layout and metrics.  It is the easier of the two methods for rendering text, but it does not support having mixed styles within the same text string.  For that you would need to build the text in an IDWriteTextLayout and render it with ID2D1RenderTarget::DrawTextLayout (discussed in the next section).  Text rendered with this method will appear center-aligned to the bounding rectangle in which it is placed.

std::wstring test = TEXT("TEST");
D2D1_RECT_F textLocation{20,200,800,800};
_pRenderTarget->DrawTextW(
	test.c_str(), test.size(), //The text string and length
	_pDefaultFont.Get(),  //The font and style description
        &textLocation,  //The location in which to render the text
	_pBackgroundBrush.Get(),  //The brush to use on the text
	D2D1_DRAW_TEXT_OPTIONS_NONE, 
	DWRITE_MEASURING_MODE_NATURAL
);

Text Layout

A text layout serves a similar function as a text block in that it is used for showing text.  The primary difference is that a text block has a one-to-one relationship with what is rendered on the screen and a text layout (through the IDWriteTextLayout interface) can be rendered several times.  If there were labels that were used repeatedly with the same text, they could be implemented by creating an IDWriteTextLayout object and rendering it several times.

//In InitDeviceIndependentResources()

std::wstring  stringResult = L"RESULT";
TOF(_pWriteFactory->CreateTextLayout(
	stringResult.c_str(),
	stringResult.size(),
	_pDefaultFont.Get(),
	400, 90, 
	&_pTextLayout
));

//In OnRender()
_pRenderTarget->DrawTextLayout({ 0, 0 }, _pTextLayout.Get(), _pBackgroundBrush.Get());
ID2D1AppWindow_DWrite_TextLayout
The resulting text from the above code

Unlike text rendered with ID2D1RenderTarget::DrawText, text rendered with IDWriteTextLayout can have styling applied to specific ranges of letters within the text.  If you need to mix text styles, use IDWriteTextLayout instead of ID2D1RenderTarget::DrawText.  Create the text layout initially using a text format that covers the majority of the text.  Where deviations from your selected default should apply, create a DWRITE_TEXT_RANGE instance.  DWRITE_TEXT_RANGE contains the index of the starting character for the new text styling and the number of characters to which it should be applied.  Some of the IDWriteTextLayout functions for adjusting styling over a range are SetFontWeight, SetFontStyle, SetFontSize, and SetUnderline.

When creating the IDWriteTextLayout a width and a height for the text area are needed.  Content rendered within this rectangle will automatically be wrapped.

std::wstring  stringResult = L"RESULT";
TOF(_pWriteFactory->CreateTextLayout(
	stringResult.c_str(),
	stringResult.size(),
	_pDefaultFont.Get(),
	400, 90, 
	&_pTextLayout
));
DWRITE_TEXT_RANGE range{ 2,3 };
_pTextLayout->SetFontWeight(DWRITE_FONT_WEIGHT_EXTRA_LIGHT, range);

ID2D1AppWindow_DWrite_TextLayout_range

Applying DirectWrite to the Project Interface

Where I left off, I had used Direct2D to render color to areas in which the interface will show information.

D2DAppWindow_sonicInterface

The next step is to apply text to the interface.  This interface will ultimately be used for performing batch processing of some media files.  The details and implementation of that processing are not shown here and are not important since this is only showing an interface.  The text populating the interface in this example is for demonstrative purposes only.

As shown when creating the interface, I am making a list of shape types to render.  In the call to OnRender() there is a switch statement that makes the necessary calls depending on the next shape type in the list.  Between the last post and this one I changed my shape representation to use a shape base class and subclasses instead of a single struct with the additional data packaged in unioned elements.

I am using IDWriteTextLayout to render text instead of ID2D1RenderTarget::DrawText.  Since DrawText centers the text within the bounding rectangle, I could not get the layout to be what I wanted.  Using IDWriteTextLayout in combination with layout options I was able to achieve the layout that I was looking for.

With the development of a hierarchy for the types of items that I am rendering, I am almost tempted at this point to just start defining a control hierarchy.  While that would have utility it may also distract from the concepts being shown here.  A control hierarchy may be written some other day.  For now the hierarchy I am using for shapes is shown in this code.

interface  IDispose {
	virtual void Dispose() = 0;
};

struct Shape: public IDispose {
	Shape() {}

	Shape(ShapeType shapeType, PaletteIndex paletteIndex)
	{
		this->shapeType = shapeType;
		this->paletteIndex = paletteIndex;
	}
	virtual void Dispose()
	{

	}
	std::wstring tag;
	ShapeType shapeType;
	PaletteIndex paletteIndex;
};

struct TextShape :public Shape {
	TextShape() {}
	TextShape(std::wstring text, D2D1_RECT_F location, TextStyle textStyle = TextStyle_Label, PaletteIndex palette = PaletteIndex_Primary, 
		DWRITE_TEXT_ALIGNMENT textAlignment = DWRITE_TEXT_ALIGNMENT_LEADING, 
		DWRITE_PARAGRAPH_ALIGNMENT paragraphAlignment = DWRITE_PARAGRAPH_ALIGNMENT_CENTER)
		:Shape(ShapeType_Text, palette)
	{
		this->text = text;
		this->location = location;
		this->textStyle = textStyle;
		this->paragraphAlignment = paragraphAlignment;
		this->textAlignment = textAlignment;
	}

	void Dispose() override 
	{
		this->textLayout = nullptr;
	}
	std::wstring text;
	D2D1_RECT_F location;
	TextStyle textStyle;
	ComPtr<IDWriteTextLayout> textLayout;
	DWRITE_PARAGRAPH_ALIGNMENT paragraphAlignment;
	DWRITE_TEXT_ALIGNMENT textAlignment;

};

struct RectangleShape : public Shape {
	RectangleShape() :Shape(ShapeType_Rectangle, PaletteIndex_Primary) {

	}
	RectangleShape(D2D1_RECT_F rect, PaletteIndex p) :
		Shape(ShapeType_Rectangle, p)
	{
		this->rect = rect;
	}
	D2D1_RECT_F rect;
};

struct EllipseShape : public Shape {
	EllipseShape():Shape(ShapeType_Ellipse, PaletteIndex_Primary)
	{}
	EllipseShape(D2D1_ELLIPSE ellipse, PaletteIndex p = PaletteIndex_Primary)
		:Shape(ShapeType_Ellipse, p)
	{
		this->ellipse = ellipse;
	}

	D2D1_ELLIPSE ellipse;
	
};

The OnRender() method has now been modified to reflect the new struct hierarchy and now has additional code for rendering text.

switch ((*current)->shapeType)
{
	case ShapeType::ShapeType_Rectangle: 
	{
		std::shared_ptr<RectangleShape> r = std::static_pointer_cast<RectangleShape>(*current);
		_pRenderTarget->FillRectangle(r->rect, brush.Get());
		break;
	}
	case ShapeType_Ellipse: 
	{
		std::shared_ptr<EllipseShape> e = std::static_pointer_cast<EllipseShape>(*current);
		_pRenderTarget->FillEllipse(e->ellipse, brush.Get()); 
		break;
	}
	case ShapeType_Text:
	{
		std::shared_ptr<TextShape> t = std::static_pointer_cast<TextShape>(*current);
		if (t->textLayout)
		{
			ComPtr<IDWriteTextFormat> format = _pDefaultFont;
			//_pRenderTarget->DrawTextW(t->text.c_str(), t->text.size(), format.Get(), t->location, brush.Get());
			D2D1_POINT_2F p = { t->location.left, t->location.top };
			_pRenderTarget->DrawTextLayout(p, t->textLayout.Get(), brush.Get());
		}
		break;
	}
	default:
		break;
}

And now the UI looks more complete.

D2D1_UI_With_Text

There are adjustments to be made with the margins on the text and positioning.  Those are going to be ignored until other items are in place.  One could spend hours making adjustments to positioning.  I am holding off on small adjustments until the much larger items are complete.

The complete source code can be found in the following commit on Github.

https://github.com/j2inet/CppAppBase/tree/5189359d50e7cdaee79194436563b45ccd30a88e

There are a few places in the UI where I would like to have vertically rendered text.  A 10 to 15 degree tilt has yet to be applied to the entire UI.  I would also like to be able to display preview images from the items being processed.  My next posts in this series will address: Displaying Images in Direct2D and Applying Transformations in Direct2D.


Run .Net Core on the Raspberry Pi

The .NET Framework acts as an intermediate execution environment; with the right runtime, a .NET executable that was built on one platform can run on another. .NET Core focuses on the APIs that are supported across a variety of platforms, which allows it to run in even more places. Windows, Linux, and macOS are all operating systems on which the .NET Core framework will run.

The .NET Core framework also runs on ARM systems and can be installed on the Raspberry Pi. I’ve successfully installed the .NET Core framework on the Raspberry Pi 3 and 4. Unfortunately it isn’t supported on the Raspberry Pi Zero; ARMv7 is the minimum ARM version supported and the Pi Zero uses an ARMv6 processor. Provided you have a supported system you can install the framework in a moderate amount of time. The instructions I use here assume that the Pi is accessed over SSH. To begin you must find the URL to the version of the framework that works on your device.

Visit https://dotnet.microsoft.com to find the downloads. The current version of the .NET Core framework is 3.1, and the 3.1 downloads can be found there. For running and compiling applications the installation to use is the .NET Core SDK (I’ll visit ASP.NET Core in the future). For ARM processors there are 32-bit and 64-bit downloads. If you are running Raspbian, use the 32-bit version even if the hardware is a 64-bit Raspberry Pi; Raspbian is a 32-bit operating system, so the 32-bit .NET Core SDK installation is necessary. Clicking on the link will take you to a page where the file automatically downloads. Since I’m doing the installation over SSH I cancel the download and grab the direct download link. Once you have the link, SSH into the Raspberry Pi.

You’ll need to download the framework. Using the wget command followed by the URL will result in the file being saved to storage.

wget https://download.visualstudio.microsoft.com/download/pr/ccbcbf70-9911-40b1-a8cf-e018a13e720e/03c0621c6510f9c6f4cca6951f2cc1a4/dotnet-sdk-3.1.201-linux-arm.tar.gz

After the download has completed the file must be unpacked. The location that I’ve chosen is based on where some .NET Core related tools look for the framework by default. I am using the path /usr/share/dotnet. Create this folder and unpack the framework to this location.

sudo mkdir /usr/share/dotnet
sudo tar zxf dotnet-sdk-3.1.201-linux-arm.tar.gz -C /usr/share/dotnet

As a quick check to make sure it works go into the folder and try to run the executable named “dotnet.”

cd /usr/share/dotnet
./dotnet

The utility should print some usage information if the installation was successful.  The folder should also be added to the PATH environment variable so that the utility is available regardless of the current folder. Another environment variable named DOTNET_ROOT will also be created to let other utilities know where the framework installation can be found.  Open ~/.profile for editing.

sudo nano ~/.profile

Scroll to the bottom of the file and add the following two lines.

export PATH=$PATH:/usr/share/dotnet
export DOTNET_ROOT=/usr/share/dotnet

Save the file and reboot. SSH back into the Pi and try running the command dotnet. You should get a response even if you are not in the /usr/share/dotnet folder.

With the framework installed, new projects can be created and compiled on the Pi. To test the installation try making and compiling a “Hello World!” program. Make a new folder in your home directory for the project.

mkdir ~/helloworld
cd ~/helloworld

Create a new console project with the dotnet command.

dotnet new console

A few new elements are added to the folder. If you view the file Program.cs you will see that the code has defaulted to a hello world program that prints a greeting to the console. Compile the program with the dotnet command.

dotnet build

After a few moments the compilation completes. The compiled output will be in ~/helloworld/bin/Debug/netcoreapp3.1. A file named helloworld is in this folder. Type its name to see it run.

My interest in the .NET Core framework is in using the Pi as a low-powered ASP.NET web server. In the next post about the .NET Core framework I’ll set up ASP.NET Core and host a simple website.

To see my other post related to the Raspberry Pi click here.

Introduction to Direct2D: Part 1

DirectX is a family of APIs that provide functionality related to multimedia. Some of the APIs are focused on sound, some on input, and some on graphics. Today I want to present one of the graphics APIs based on the DirectX Graphics Infrastructure (DXGI), known as Direct2D. The Direct2D API is for rendering graphics in 2D (as suggested by the name). The API takes advantage of features in the graphics card to accelerate rendering.

When working with Direct2D you don’t create the objects directly. Instead you use a factory that creates the objects and returns interfaces to them, or you use those returned interfaces to request additional objects. The interfaces returned by the Direct2D APIs are COM pointers. COM, or Component Object Model, is a standard for making components that interact with each other. The standard has existed since 1993. Despite being nearly 30 years old it is important to Windows and used more than one might think within .NET.  There are books that cover how COM works. Some of them are old, but don’t let that make you think that their information isn’t relevant. A full discussion of COM is outside of the scope of this post.

To get an initialized window to work with, I’ll be using the sample Win32/C++ code that I presented in a previous post. See that post for more information about how the base window works. In my derived class I’ll begin initializing Direct2D specific objects.

Adding Update and Render Methods

The AppWindow sample implements a message pump that uses PeekMessage instead of GetMessage. This allows the application to do other things instead of halting when there are no messages to be processed. When there are no messages a method named Idle() is called. This method will be repurposed for adding an update/render loop.

There will be four methods added: Update(), OnUpdate(), Render(), and OnRender(). Both OnUpdate() and OnRender() are virtual; these methods contain code that could be completely replaced or overridden if this code were repurposed for another application.  The other two methods, Update() and Render(), contain code that needs to run on every update and render cycle and that I would not want affected when derived applications override the rendering or the updating. The Update() method also queries the performance counter to calculate the amount of time that has passed since the last update. This information could be used to know how far to move an object in a scene on the next frame update or to calculate the FPS.

D2D1AppWindow_Idle_cpp
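
The screenshot above shows the implementation; since it is only an image, here is a minimal sketch of the same plumbing. The member name _lastCounter and the OnUpdate(double) signature are assumptions for illustration, not necessarily the exact code in the repository.

// Idle() is called from the message pump whenever no messages are waiting.
void D2D1AppWindow::Idle()
{
	Update();
	Render();
}

void D2D1AppWindow::Update()
{
	// Use the performance counter to measure how much time has passed since the last update.
	LARGE_INTEGER now, frequency;
	QueryPerformanceCounter(&now);
	QueryPerformanceFrequency(&frequency);
	double elapsedSeconds = static_cast<double>(now.QuadPart - _lastCounter.QuadPart) / frequency.QuadPart;
	_lastCounter = now;
	OnUpdate(elapsedSeconds); // derived classes override OnUpdate() with their own logic
}

void D2D1AppWindow::Render()
{
	OnRender(); // derived classes override OnRender() to draw their scene
}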

Initializing D2D

You’ll see two variations of the D2D interfaces. One variation has a numeral 1 appended to the interface name; those versions were released with Windows 8 and are the ones that I’ll be using here.  There are two interfaces that we need to initialize: ID2D1Factory, which will be the object from which we directly or indirectly make the other D2D1 objects, and ID2D1HwndRenderTarget, which is the interface that we use for painting our scene onto the window created by the application base.  As we create objects, something to keep in mind is the two classifications into which DXGI objects can fall (this isn’t limited to Direct2D; it is applicable to the other DirectX graphics APIs). An object can be a device dependent resource or a device independent resource. What’s the difference?

Device Dependent and Device Independent Resources

Device in this context refers to the video rendering device. This will usually be the video card, but in some scenarios it could be a purely software device or even refer to a resource hosted on another computer. To keep things simple let’s just view device as being synonymous with video card or video adapter. Some resources allocate memory on the device (video adapter). But these resources could be lost at any moment if the state of the device changes (changing the resolution of the device will do this) or if the device gets disconnected (yes, a video adapter can be disconnected while the program is still running). If this occurs then none of the device dependent resources are valid.

Device independent resources do not allocate resources on the video card. These resources use RAM available to the CPU and are not affected by changes in the display configuration.

When a device is lost all of the device dependent resources must be released and recreated. In some programs a valid and simple way to do this is to terminate the program and start a new instance. While that is an easy solution, the program will lose any context of what the user was doing unless it also saves and restores state. For scenarios in which I’ve used this solution, access to the computer hardware was restricted, making the loss of a device an extremely low-frequency event (something that might happen once a day at 3 AM while automated maintenance is doing something to the computer).

Examples of objects that are device independent resources include the ID2D1Factory object that is used for creating the other objects, Geometry objects used for describing shapes (objects that implement the ID2D1Geometry interface), and stroke style objects (ID2D1StrokeStyle).

Examples of device dependent resources include ID2D1Device (which refers to the device that itself could be lost), brush resources (which implement ID2D1Brush), and ID2D1Layer (for creating rendering layers).

Object Creation

When initializing my D2D objects I have two methods in which the initialization occurs. The method InitDeviceResources() initializes the device dependent resources. InitDeviceIndependentResources() creates the other resources. During the program’s lifecycle InitDeviceIndependentResources() will only be called once. InitDeviceResources() will be called at least once but could be called many more times.  To ensure that these resources are released there is an additional method, DiscardDeviceResources(), that releases all of the device dependent resources. Since the ComPtr type is being used, setting a pointer to nullptr will result in the resource being released.

D2DWindow_Init

The call to AppWindow::Init() results in the creation of the window. The initial implementations for InitDeviceIndependentResources() and InitDeviceResources() will create the ID2D1Factory object and create a RenderTarget for rendering to the screen.

d2dAppWindow_InitDeviceResources_cpp

InitDeviceIndependentResources() will create the ID2D1Factory. The other resources will be created in InitDeviceResources() and cleared in DiscardDeviceResources().
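
The implementations are shown in the screenshots above. As a readable stand-in, here is a minimal sketch of what creating and discarding the render target can look like. The member names (_hWnd, _size, _pD2D1Factory, _pRenderTarget) follow the naming used elsewhere in this post but are assumptions rather than the exact code.

void D2D1AppWindow::InitDeviceResources()
{
	// The render target is a device dependent resource created through the factory.
	TOF(_pD2D1Factory->CreateHwndRenderTarget(
		D2D1::RenderTargetProperties(),
		D2D1::HwndRenderTargetProperties(_hWnd, _size),
		&_pRenderTarget));
}

void D2D1AppWindow::DiscardDeviceResources()
{
	// Assigning nullptr to a ComPtr releases the underlying COM object.
	_pRenderTarget = nullptr;
}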

Losing the Device

During the render phase of the application the various calls made result in commands being queued for execution on the video card to render our scene.  The commands are not executed until we end our drawing (which really just ends the adding of commands to the queue). If the connection to the device has been lost, then when we end our drawing the call to EndDraw() returns a value that can be examined to detect the disconnection.  If the returned value is D2DERR_RECREATE_TARGET the device dependent resources must be reinitialized.

d2dAppWindowDeviceLost_cpp

Rendering

In DXGI applications drawing is performed using render target objects. There are several types of render targets. We will be using an ID2D1RenderTarget in this sample application, one that renders to the application window. Render targets can also render to off-screen surfaces, textures, or remote machines.

In the previous code sample a render target was already being used to clear the application window and fill it with the color blue. The general usage pattern will look like the following.

ID2D1RenderTarget* myRenderTarget;
myRenderTarget->BeginDraw();
//call draw functions here
myRenderTarget->EndDraw();

Let’s draw a square. There are three things that we need: a description of the square’s geometry, a brush to draw the square with, and a color to assign to the brush. If you’ve used Win32’s GDI (Graphics Device Interface) you’ve seen the term “brush” used before in a similar sense. A brush is a system resource that contains information on what to use when painting something. A brush could render a solid color, or it could use portions of a bitmap to fill a shape.

The struct D2D1_RECT_F is used to describe the square here (also available: D2D1_RECT_U and D2D1_RECT_L). This structure contains the elements left, top, right, and bottom. Any D2D1_RECT_F instances that are created reside within the CPU’s memory.  The color to be used with the brush is also assigned through a struct that only uses CPU memory. The type D2D1_COLOR_F is used here, with the values red, green, blue, and alpha of type float.

For the brush there are GPU resources that will be consumed; it is a device dependent resource. We request that a brush be created, and receive a pointer to its interface, by calling ID2D1RenderTarget::CreateSolidColorBrush. The method arguments are the color that the brush should be and the address of the pointer that will receive the interface.

d2d1AppWindow_CreateResources
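
Since the resource creation above is only a screenshot, here is a small sketch of the calls it describes. The coordinates and color are placeholders, and the member names are assumptions consistent with the rest of this post.

// In InitDeviceResources(), after the render target has been created.
_mySquare = D2D1::RectF(20.0f, 20.0f, 120.0f, 120.0f);                // CPU-side description of the square
D2D1_COLOR_F fillColor = D2D1::ColorF(D2D1::ColorF::CornflowerBlue);  // CPU-side color value
// The brush consumes resources on the device, so it is a device dependent resource.
TOF(_pRenderTarget->CreateSolidColorBrush(fillColor, &_pPrimaryBrush));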

To render the square the OnRender() method requires one or two additional calls. When rendering a geometry there are two types of methods available: one type outlines the geometry and the other fills the shape (resulting in a solid shape). To render a geometry that has both a solid fill and an outline, first use the fill method and then the draw method, as I do below.

D2D1AppWindow_DrawSquare

There are a number of other shapes that ID2D1RenderTarget supports: ellipses, rectangles, rounded rectangles, text, and other constructed geometries.

Resizing the Window

The square renders, but if you resize the window you’ll see a problem. As the window is shrunk on one axis or the other, the square shrinks on that axis; the square becomes a rectangle with uneven sides. Fixing this is easy enough. The render target must be updated to reflect the new window size. Since this sample application is derived from the CppAppBase that I presented in a previous post, there is a resize method that can be overridden to update the render target. ID2D1HwndRenderTarget::Resize accepts a D2D1_SIZE_U struct and updates the render target size accordingly.

void D2D1AppWindow::OnResize(UINT width, UINT height)
{
	this->_size = D2D1_SIZE_U{ width, height };
	if (_pRenderTarget)
	{
		_pRenderTarget->Resize(_size);
	}
}

There is still an annoying behaviour. While the window is being resized the screen is blank. Part of the reason for this is that rendering only occurs when there are no messages to process, and resizing generates messages. One of the messages generated while resizing is a WM_PAINT message; this tells the application to redraw itself. If we call OnRender() in response to a WM_PAINT message, this handles the screen updates while resizing. The AppWindow base class has an OnPaint() method that is called when a WM_PAINT message comes through. Overriding this method and adding a call to OnRender() results in a window that doesn’t go blank when resizing.

void D2D1AppWindow::OnPaint()
{
	this->OnRender();
}

Rendering a Shape List

One could write code to individually render each one of the primitives needed to make up a more complex scene. For applications with complex scenes this is especially undesirable. Instead, a better solution is to have the primitives encoded within a file. If a change needs to be made to a scene, the change would be made in the file instead of in code.

For the next sample I’m going to move halfway to such a solution. I’m making a list of primitives to be rendered and am populating the list from code. With a bit more work the data could be externalized. I’m using the list to start rebuilding an interface. I was playing a video game on my Nintendo Switch. The game had a screen, displayed between levels, that shows how well someone did. I thought the interface was both simple and æsthetically pleasing.

ES4msOWUcAAzhAP

Creating something similar to this is my goal. It is composed of several rectangles, a few concentric circles, and text. Were the interface all the same type of geometry, it could be defined with a simple list of attributes about the rectangles. Since there is more than one type of geometry, the list elements must identify a geometry type. I’ve made a struct that has a field identifying the shape type (rectangle or ellipse for now), a field identifying which of the colors in my three-color palette the item should use, and the information for the shape itself. The rectangle data and ellipse data are in a union to minimize the amount of memory used for each item.

enum   PaletteIndex {
	Primary = 0,
	Secondary = 1,
	Background = 2
};

enum ShapeType {
	ShapeType_Rectangle = 0, 
	ShapeType_Ellipse = 1
};

struct ColoredShape
{
	static ColoredShape MakeEllipse(D2D1_ELLIPSE e, PaletteIndex p)
	{
		ColoredShape c{ 0 };
		c.shapeType = ShapeType::ShapeType_Ellipse;
		c.ellipse = e;
		c.paletteIndex = p;
		return c;
	}

	static ColoredShape MakeRectangle(D2D1_RECT_F r, PaletteIndex p)
	{
		ColoredShape c{ 0 };
		c.shapeType = ShapeType::ShapeType_Rectangle;
		c.rect = r;
		c.paletteIndex = p;
		return c;
	}

	union  {
		D2D1_ELLIPSE ellipse;
		D2D1_RECT_F rect;
	};
	ShapeType shapeType;
	PaletteIndex paletteIndex;
};

The factory methods make it easier to initialize instances of the value. I’ve got a vector that holds these structs. It is populated with the data for each rectangle and ellipse to be rendered. The items will be rendered in the same order that they appear in the list. If two items overlap, the later of the two will be rendered on top.

void D2D1AppWindow::InitDeviceIndependentResources()
{
	TOF(D2D1CreateFactory(D2D1_FACTORY_TYPE_SINGLE_THREADED, IID_PPV_ARGS(&_pD2D1Factory)));
	_primaryColor = D2D1::ColorF(0.7843F, 0.0f, 0.0f, 1.0f);
	_secondaryColor = D2D1::ColorF(0.09411, 0.09411, 0.08593, 1.0);
	_backgroundColor = D2D1::ColorF(0.91014, 0.847655, 0.75f, 1.0);
	_mySquare = { 20, 20, 30, 30 };

	
	_shapeList.push_back(ColoredShape::MakeRectangle({0,0,456,104 }, Primary));
	_shapeList.push_back(ColoredShape::MakeRectangle({ 0,128,456,550 }, Primary ));
	_shapeList.push_back(ColoredShape::MakeRectangle({ 488,0,508,110 }, Secondary ));
	_shapeList.push_back(ColoredShape::MakeRectangle({ 488,40,900,115 }, Secondary ));

	_shapeList.push_back(ColoredShape::MakeRectangle({ 530,130,850,180 }, Secondary));
	_shapeList.push_back(ColoredShape::MakeRectangle({ 690,130,850,180 }, Primary));

	_shapeList.push_back(ColoredShape::MakeRectangle({ 530,190,850,240 }, Secondary));
	_shapeList.push_back(ColoredShape::MakeRectangle({ 690,190,850,240 }, Primary));

	_shapeList.push_back(ColoredShape::MakeRectangle({ 530,250,850,300 }, Secondary));
	_shapeList.push_back(ColoredShape::MakeRectangle({ 690,250,850,300 }, Primary));

	_shapeList.push_back(ColoredShape::MakeRectangle({ 0,350,600,450 }, Secondary ));
	_shapeList.push_back(ColoredShape::MakeRectangle({ 0,350,600,450 }, Secondary));

	_shapeList.push_back(ColoredShape::MakeEllipse({ 570,400,60, 60 }, Secondary));
	_shapeList.push_back(ColoredShape::MakeEllipse({ 570,400,57, 57 }, Background));
	_shapeList.push_back(ColoredShape::MakeEllipse({ 570,400,55, 55 }, Secondary));
	_shapeList.push_back(ColoredShape::MakeEllipse({ 570,400,52, 52 }, Background));

	_shapeList.push_back(ColoredShape::MakeRectangle({ 476,470,870, 550 }, Secondary));
}

In the OnRender() method this list is iterated and each element is drawn.

void D2D1AppWindow::OnRender()
{
	HRESULT hr;
	if (!_pRenderTarget)
		return;
	_pRenderTarget->BeginDraw();
	_pRenderTarget->Clear(_backgroundColor);
	_pRenderTarget->FillRectangle(&_mySquare, _pPrimaryBrush.Get());
	_pRenderTarget->DrawRectangle(&_mySquare, _pSecondaryBrush.Get());

	for (auto current = _shapeList.begin(); current != _shapeList.end(); ++current)
	{
		ComPtr<ID2D1Brush> brush;
		switch (current->paletteIndex)
		{
		case Primary: brush = _pPrimaryBrush; break;
		case Secondary: brush = _pSecondaryBrush; break;
		case Background: brush = _pBackgroundBrush; break;
		default: brush = _pPrimaryBrush;
		}
		switch (current->shapeType)
		{
		case ShapeType::ShapeType_Rectangle:_pRenderTarget->FillRectangle(&current->rect, brush.Get()); break;
		case ShapeType::ShapeType_Ellipse:_pRenderTarget->FillEllipse(&current->ellipse, brush.Get()); break;
		}
		
	}
	
	hr = _pRenderTarget->EndDraw();
	if (hr == D2DERR_RECREATE_TARGET)
	{
		// The surface has been lost. We need to recreate our
		// device dependent resources. 
		this->DiscardDeviceResources();
		this->InitDeviceResources();
	}
}

Running the program gives me something that looks like it was inspired by the video game interface.

D2DAppWindow_sonicInterface

There are similarities, but there are also plenty of differences. Aside from the missing text, in the video game interface the text is rotated slightly. With the way rectangles are defined now there is no apparent way to render a rectangle that isn’t aligned with the X or Y axis. To move forward we need to know how to apply transformations to the rendering and how to render text. I’ll continue there in the next part of this series.

The complete source code for the program can be found on github.

https://github.com/j2inet/CppAppBase

 

Win32/C++ Application Base

Win32/C++

Win32 is necessary for development in many current technologies.  Some of my current projects utilize Win32/C++ applications.  Many of them use a similar pattern for starting the application.  Initializing the window that hosts these applications is not particularly interesting, but it is a foundation block for some of my applications that I will share in the future.

In Win32 UI programs there will be at least one function defined that is known as a WNDPROC.  A WNDPROC is a callback function.  It is the primary function through which Windows communicates various events to an application.  When a user moves the mouse over an application, WNDPROC is called.  When the user presses a key, WNDPROC is called.  When an application is being notified to render its UI again, WNDPROC is called.  The call signature for a WNDPROC is shown below.

LRESULT CALLBACK WindowProc(
  _In_ HWND   hwnd,
  _In_ UINT   uMsg,
  _In_ WPARAM wParam,
  _In_ LPARAM lParam
);

The first argument in this function is a handle to the window that is the intended recipient of the message.  An application could have more than one window, but an application with only one window will always see the same value here.  The second argument is a numerical value that identifies the event or message being sent.  The constants that define possible values are generally prefixed with WM_ (meaning Windows Message).  Examples of such values include: WM_COMMAND, which is generally the result of a button being clicked; WM_PAINT, which tells the application to redraw its UI; WM_SIZE, meaning that the window has been resized; and WM_CLOSE, meaning that there was a request to close the window.

Because of the central position that the WNDPROC function plays in receiving notifications from the operating system, if all responses to the operating system were handled here the function would quickly grow large.  I have seen this done, and it can become a nightmare on shared projects.  One way to deal with this is to have the WNDPROC function act only as a router that passes incoming messages to other functions.  A simple way to implement this is with a switch statement where each case does a minimal amount of work before passing the message on to a function made specifically to handle that message, as in the sketch below.
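
Here is a brief sketch of that routing style.  It requires <windows.h>, and the handler functions are placeholders for illustration; the AppWindow class described below routes to its own member methods instead.

// Placeholder handlers; in a real application these would do the actual work.
void OnResize(HWND hwnd, UINT width, UINT height) { /* react to the new size */ }
void OnCommand(HWND hwnd, WPARAM wParam) { /* a button or menu item was activated */ }

LRESULT CALLBACK WindowProc(HWND hwnd, UINT uMsg, WPARAM wParam, LPARAM lParam)
{
	switch (uMsg)
	{
	case WM_SIZE:    OnResize(hwnd, LOWORD(lParam), HIWORD(lParam)); return 0;
	case WM_COMMAND: OnCommand(hwnd, wParam);                        return 0;
	case WM_CLOSE:   DestroyWindow(hwnd);                            return 0;
	case WM_DESTROY: PostQuitMessage(0);                             return 0;
	default:         return DefWindowProc(hwnd, uMsg, wParam, lParam);
	}
}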

That is the solution used in the projects I plan to share in the future.  I created an abstract base class named AppWindow that creates an application window; has a few methods for handling certain Windows messages; and has case statements for calling those methods in response to a Windows message.

Windows communicates these messages to an application through a message queue.  An application must retrieve the next message to begin processing it.  If an application does not retrieve any messages for a certain period of time, Windows assumes the application is locked up or busy and may prompt the user to either terminate the program or wait for it.  To keep messages moving through the queue an application must implement a message pump.  In a simple implementation of a message pump two functions are called in a loop: GetMessage, which fills a MSG struct with the information for a message, and DispatchMessage, which hands the message off to the application’s WNDPROC for processing.
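
As a concrete reference, a minimal GetMessage based pump looks roughly like this.

MSG msg{};
while (GetMessage(&msg, nullptr, 0, 0) > 0)
{
	TranslateMessage(&msg); // may generate character messages from key messages
	DispatchMessage(&msg);  // hands the message to the window's WNDPROC
}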

If there are no messages available, calling GetMessage will cause the application to wait until there is one.  During this wait the main thread of the application will not use any CPU time.  In some applications, such as games or other applications that continuously perform UI updates, using GetMessage is undesirable since it can cause the UI thread to wait.  In such applications PeekMessage can be used instead.  Unlike GetMessage, PeekMessage will not block if there are no messages available; its return value indicates whether a new message was retrieved.  If there are no messages to be processed the application can perform other tasks, as in the sketch below.
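
A sketch of a PeekMessage based pump follows.  It is roughly the shape the AppWindow base class uses, with the Idle() call doing the update and render work when the queue is empty; the loop details here are illustrative rather than the exact code from the repository.

MSG msg{};
bool running = true;
while (running)
{
	if (PeekMessage(&msg, nullptr, 0, 0, PM_REMOVE))
	{
		if (msg.message == WM_QUIT)
		{
			running = false;
		}
		TranslateMessage(&msg);
		DispatchMessage(&msg);
	}
	else
	{
		Idle(); // no messages waiting, so do other work such as updating and rendering
	}
}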

An application may modify a message between the time that it was retrieved with PeekMessage or GetMessage and when it is passed off through DispatchMessage.  You will see the use of a method named TranslateMessage which, despite its name, does not perform the modification of any messages.  Instead, it creates new messages in response to certain virtual key messages and adds these to the message queue.  For the programs that I present, the specifics of what it does are not important and I will not be modifying any messages in the message pump.

I do not use any of the traditional Win32 UI elements in the programs that I will be sharing (buttons, labels, text boxes, list boxes, etc.) but those elements could easily be created within the application.  If I were making a framework for such controls, I would probably make classes for each one.  My base class does have a CreateButton and CreateLabel method that are used for debug purposes.
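
As an illustration of what such a debug helper can wrap, a button can be created with the predefined “BUTTON” window class; the exact signature of the CreateButton method in the base class may differ from this sketch.

HWND CreateButton(HWND parent, HINSTANCE hInstance, LPCWSTR text, int x, int y, int width, int height, int id)
{
	return CreateWindowExW(
		0,                      // no extended styles
		L"BUTTON",              // predefined button window class
		text,                   // button caption
		WS_CHILD | WS_VISIBLE | BS_PUSHBUTTON,
		x, y, width, height,
		parent,                 // the application window hosts the control
		(HMENU)(INT_PTR)id,     // control identifier, reported back through WM_COMMAND
		hInstance,
		nullptr);
}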

AppWindow_header
The AppWindow.h file

This class is abstract and cannot be directly instantiated, so there first must be a derived class.  The only methods that the derived class absolutely must implement are the constructor and the method GetWindowClassName().  The constructor must accept an HINSTANCE argument and pass it to the AppWindow base class constructor.  GetWindowClassName() must return a unique string.  This string is going to be used for registering the window class.

DerivedWindow_h
Full implementation of a derived window
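
If the screenshot is hard to read, a minimal derived window can look roughly like the following.  The return type of GetWindowClassName() and the base class interface are assumptions based on the description above rather than the exact code.

class DerivedWindow : public AppWindow
{
public:
	DerivedWindow(HINSTANCE hInstance) : AppWindow(hInstance) { }

protected:
	// Must return a unique string; it is used when registering the window class.
	std::wstring GetWindowClassName() override
	{
		return L"DerivedWindowClass";
	}
};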

To make an application that runs, all that is left to do is create a derived application window, call its Init() method (which is where it actually creates its Window) and call its message pump.

derivedWindow_main_cpp
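
The screenshot shows the entry point; a sketch of the same idea follows.  The Init() method name comes from the description above, while MessagePump() is a placeholder for whatever the base class actually names its pump method.

int WINAPI wWinMain(HINSTANCE hInstance, HINSTANCE, PWSTR, int)
{
	DerivedWindow window(hInstance);
	window.Init();        // registers the window class and creates the window
	window.MessagePump(); // runs until the window is closed
	return 0;
}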

Running the application will result in a window showing a label and a button.  It does not do anything yet, which is what is desired.  The code samples to be posted in the future require that there be an initialized window (which this provides).  The entirety of the code can be found on GitHub.

Relevant Github Commit

Latest Version