Dynamically Rendering a Sound in HTML/JavaScript

Every so often I’ll code something not because it is something that I need, but because creating it is enjoyable. I recently stumbled upon one of these projects from some time ago and thought it was worth sharing: an experiment in dynamically rendering sound using various technologies. This post focuses on doing it in HTML/JavaScript and walks through the code.

When you are rendering sound in the browser, the user must interact with the page before you can actually play it. This prevents a window that just opened in another tab or behind other windows from being noisy before the user knows about it. If you were using this technology in a real solution, you would want to design your application such that you know the user has clicked, scrolled, or engaged in some other interaction at least once before your code begins making sound.
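
If I were integrating this into a real page, a minimal way to satisfy that requirement might look like the following sketch; startAudio is a hypothetical function that would contain the audio code from the rest of this post.

document.addEventListener('click', function () {
    //the page has now received a user gesture; audio playback is allowed
    startAudio();
}, { once: true });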

To render audio you need a reference to an AudioContext object. On many browsers this can be found at window.AudioContext. On WebKit browsers it can be found at window.webkitAudioContext. An easy way to get a reference to the right object is to coalesce the two.

var AudioContext = window.AudioContext || window.webkitAudioContext;

Now we can make our own AudioContext instance. When creating an AudioContext, two options that you can specify are the sample rate of the audio and the latency. The latency setting allows you to balance power consumption on the machine against responsiveness to the audio that is being rendered.

var context = new AudioContext({ latencyHint: "interactive", sampleRate:48000});

From our audio context we can create AudioBuffer objects. These are the objects that contain the waveform data for the sounds. An AudioBuffer could have a different sample rate than the AudioContext used to make it, though in general I don’t suggest doing this. An AudioBuffer could also have multiple channels; we will use a single channel (monaural). When creating the AudioBuffer we must specify the number of channels in the audio, the sample rate, and the count of samples in the buffer. The number of samples maps directly to a length of time. If I wanted a buffer that is 2 seconds long and I have a sample rate of 48 kHz (48,000 samples per second), then the number of samples needed in the AudioBuffer is 2 * 48,000, or 96,000.

const SAMPLE_RATE = 48000;
const CHANNEL_COUNT = 1;
const SAMPLE_COUNT = 2 * SAMPLE_RATE; //for 2 seconds
var audioBuffer = context.createBuffer(CHANNEL_COUNT, SAMPLE_COUNT, SAMPLE_RATE);

The Web Audio API contains a function for generating tones. I’m not going to use it. Instead I’m going to manually fill this buffer with the data needed to make a sound. Values in the audio buffer are floating point values in the range of -1.0 to 1.0. I’m going to fill the buffer with a pure tone of 440 Hz using the Math.sin function. To produce this frequency, the value passed to Math.sin must be incremented for each sample by 2π times the frequency divided by the sample rate. This increment is used in a for-loop to populate the buffer.

const TARGET_FREQUENCY = 440;
//radians to advance per sample: 2π × frequency ÷ sample rate
const dx = 2 * Math.PI * TARGET_FREQUENCY / SAMPLE_RATE;

var channelBuffer = audioBuffer.getChannelData(0); //channel indexes start at 0
for(var i = 0, x = 0; i < SAMPLE_COUNT; ++i, x += dx) {
   channelBuffer[i] = Math.sin(x);
}

The AudioBuffer object is populated. In the next step we can play it. To play the AudioBuffer it gets wrapped in an AudioBufferSourceNode and connected to the destination audio device.

var source = context.createBufferSource();
source.buffer = audioBuffer;
source.connect(context.destination);
source.start();

If you put all of these code samples in a JavaScript file and run it in your browser, the 440 Hz tone will begin to play after you click. It works, but let’s do something a little more interesting; I want to be able to provide a list of frequencies and hear them played back in the order in which I specified them. This will give the code enough functionality to play a simple melody. To make it even more interesting, I want to be able to have these frequencies play simultaneously. This will allow the code to play chords and melodies with polyphony.

The list of frequencies could be passed in the form of a list of numbers. I have tried specifying melodies that way; I’ve done it on the HP48G calculator. Instead of doing that, I’ll make a simple class that will hold a note (A, B, C, D, E, F, G), a number for the octave, and a duration. An array of these will make up one voice within a melody. If multiple voices are specified, they will all play their notes at once. I’m switching from JavaScript to TypeScript here just to have something automated looking over my shoulder to check for mistakes. Remember that TypeScript is a superset of JavaScript; if you know JavaScript you will be able to follow along. For fields that contain notes, I’m constraining them to one of the following values. Most of these are recognizable music notes. The exception is the letter R, which will be used for rest notes (silence).

type NoteLetter = 'A♭'|'A'|'A#'|'B♭'|'B'|'C'|'C#'|'D♭'|'D'|'D#'|'E'|'F'|'F#'|'G♭'|'G'|'G#'|'R';
type Octave = number;
type Duration = number;
 
 

I’ve made a dictionary to map each note letter to a frequency. I also have a Note class that will contain the information necessary to play a single note. Most of the fields on this class explain themselves (letter, octave, duration, volume). The two fields rampUpMargin and rampDownMargin specify how long a note should take to come to full volume and to fall back to silence. When I initially tried this with no ramp time, two side-by-side notes of the same frequency sounded like a single note. Adding a time for the sound to ramp up to full volume and back down allowed each note to have its own distinct sound.
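
The dictionary itself isn’t shown in this post, so here is a minimal sketch of what it could look like. The values are the standard octave-0 frequencies, chosen so that the render code further down can multiply by 2 raised to the octave number; 'R' maps to 0 Hz so that rests render as silence.

const noteFrequency: Record<NoteLetter, number> = {
    'C': 16.35, 'C#': 17.32, 'D♭': 17.32, 'D': 18.35, 'D#': 19.45,
    'E': 20.60, 'F': 21.83, 'F#': 23.12, 'G♭': 23.12, 'G': 24.50,
    'G#': 25.96, 'A♭': 25.96, 'A': 27.50, 'A#': 29.14, 'B♭': 29.14,
    'B': 30.87, 'R': 0 //rest: no frequency
};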

class Note {
    letter:NoteLetter;
    octave:Octave;
    duration:Duration;
    volume = 1;
    rampDownMargin = 1/8;
    rampUpMargin = 1/10;

    constructor(letter:NoteLetter,octave:Octave,duration:Duration) {
        this.letter = letter;
        this.octave = octave;
        this.duration = duration;
    }
}

A voice will be a collection of notes. An array would be sufficient, but I want to be able to expand upon the Voice class to support some other features. For example, I’ve added a member named isMuted so that I can silence a voice without deleting it. I may also add methods to serialize or deserialize a voice, or functionality to make editing easier.

class Voice { 
    noteList:Array<Note>;
    isMuted = false;

    constructor() { 
        this.noteList = [];
    }

    addNote(note:NoteLetter, octave:number|null = null, duration:number|null = null ) : Note {
        octave = octave || 4;
        duration = duration || 1;
        var n = new Note(note, octave, duration);
        this.add(n);
        return n;
    }

    add(n:Note):void {
        this.noteList.push(n);
    }

    parse(music:string):void {
        //parsing a voice from a string is not implemented yet
        var noteStrings = music.split('|');
    }

}

Voices are combined together in a melody. In addition to collecting the voices, the Melody class contains the functionality for rendering the voices to a playable audio buffer.

class Melody { 

    voiceList:Array<Voice>;

    constructor() {
        this.voiceList = [];
    }

    addVoice():Voice {
        var v = new Voice();
        this.add(v);
        return v;
    }

    add(v:Voice) { 
        this.voiceList.push(v);
    }

    render(audioBuffer:AudioBuffer, bpm:number) {
        var channelData = audioBuffer.getChannelData(0);
        var sampleRate = audioBuffer.sampleRate;
        var samplesPerBeat = (60/bpm)*sampleRate;
        this.voiceList.forEach(voice => {
            if(voice.isMuted)
                return;
            var position = 0;
            var x = 0;
            voice.noteList.forEach((note)=>{                

                var octaveMultiplier = Math.pow(2,note.octave );
                var frequency = noteFrequency[note.letter] * octaveMultiplier;
                var dx:number = (frequency == 0) ? 0 : (2 * Math.PI * frequency) / sampleRate;
                var sampleCount = samplesPerBeat * note.duration;
                var rampDownCount = samplesPerBeat * note.rampDownMargin;
                var rampUpCount = samplesPerBeat * note.rampUpMargin;
                var noteSample = 0;

                while (sampleCount > 0 && position < channelData.length) {
                    var rampAdjustment = 1;
                    if(rampUpCount > 0) {
                        rampAdjustment *= Math.min(rampUpCount, noteSample)/rampUpCount;
                    }

                    if(rampDownCount > 0) {
                        rampAdjustment *= Math.min(rampDownCount, sampleCount)/rampDownCount;
                    }

                    channelData[position] += rampAdjustment * 0.25 * note.volume * Math.sin(x);
                    --sampleCount;
                    ++noteSample;
                    ++position;
                    x += dx;
                }
            });
        });
    }
}
 
To test this code out, I’ve made an audio buffer using the code from the earlier sample. I then created a Melody object and added a couple of voices to it: one plays a scale and the other plays the same scale in reverse order.
 
var song = new Melody();

var voice = song.addVoice();

voice.addNote('C', 4);
voice.addNote('D', 4);
voice.addNote('E', 4);
voice.addNote('F', 4);
voice.addNote('G', 4);
voice.addNote('A', 4);
voice.addNote('B', 4);
voice.addNote('C', 5, 2);

voice = song.addVoice();

voice.addNote('C', 5, 1);
voice.addNote('B', 4);
voice.addNote('A', 4);
voice.addNote('G', 4);
voice.addNote('F', 4);
voice.addNote('E', 4);
voice.addNote('D', 4);
voice.addNote('C', 4, 2);    

song.render(audioBuffer, 120 /*BPM*/);
var source = context.createBufferSource();
source.buffer = audioBuffer;
source.connect(context.destination);
source.start();
 
With this functionality present, I feel comfortable trying to play a recognizable tune. The first tune I tried was one from a video game that I think many know. I found the sheet music and typed out the notes.
 
var song = new Melody();

var voice = song.addVoice();
voice.isMuted = false;
voice.addNote('E', 6,0.5);
voice.addNote('E', 6, 0.5);
voice.addNote('R', 6, 0.5);
voice.addNote('E', 6, 0.5);
voice.addNote('R', 6, 0.5);
voice.addNote('C', 6, 0.5);
voice.addNote('E', 6, 0.5);
voice.addNote('R', 6, 0.5);
voice.addNote('G', 6, 1);
voice.addNote('R', 6);
voice.addNote('G', 5 , 0.5);
voice.addNote('R', 6,1.5);

//------------
voice.addNote('C', 5 , 0.5);
voice.addNote('R', 6,1);
voice.addNote('G', 5 , 0.5);
voice.addNote('R', 6,1);
voice.addNote('E', 5 , 0.5);
voice.addNote('R', 6,1);
voice.addNote('A', 6 , 0.5);
There are more notes to this (which are not typed out here). If you want to hear this in action and see the code execute, I have it hosted here: https://j2i.net/apps/dynamicSound/
 
For the sake of trying something different, I also tried “Adagio for Strings and Organ in G Minor.” If you want to hear the results of that, you can find them here.
 
I got this project as far as rendering those two melodies before I turned my attention to C/C++. I preferred using C/C++ because I had more options for rendering the sound and the restrictions of the browser are not present. However, some of the features that I used were specific to Windows. Depending on how the code is to be used, there is a potential disadvantage of it not being runnable on other OSes without adapting to their audio APIs.
 
This is something that I may revisit later. 
 
 
 


Shaders on Chrome and Multi-GPU Systems

I have been working with graphics shaders in Chrome. There is a lot that you can do with shaders alone. If you are running a recent version of Chrome, have a decent GPU in your computer, and want to see what can be done with them, take a look at https://shadertoy.com for samples of the type of real-time graphics that shaders can produce. I will not talk about how shaders work here, though. I want to talk about a performance problem I encountered and how I got around it.

I usually use a 27-inch iMac running Windows for day-to-day work. The computer has a GPU that was made for mobile computers. Having been manufactured back in 2014, as you might guess, it is a pretty weak GPU. To address some of the shader performance problems that I encountered, I tried using an eGPU (external GPU). But I did not see the performance gains that I had expected. Shader performance was even worse when I tried running shaders with Chrome on the GTX 1080 that was in the external GPU.

What was going on? I decided to look at the Chromium source code to get an idea. Chrom(e|ium) uses a library called Angle for its low-level graphics calls. Angle abstracts away the underlying graphics API so that the same source base can be used on more than one type of device. On Windows machines the low-level graphics APIs are generically referred to as DirectX. DirectX is a family of APIs, with Direct3D being the set of DirectX APIs focused on 3D graphics. Angle supports Direct3D versions 9 and 11, but Chrome uses Direct3D 9.

Looking in the Angle source code, it did not take long to find the source of my problem. The lines of interest are in the constructor and in the initialize() method. In the constructor, the member mAdapter is set to the value that selects the graphics adapter to be used. It is set to D3DADAPTER_DEFAULT, which usually resolves to the adapter that hosts the primary desktop. In the initialize() method this value is passed to IDirect3D9::CreateDevice. In my case this was the built-in AMD Radeon R9 M295X. The shaders were running on this card and the output was being copied over to the GTX 1080 for display. Once I knew this, resolving the problem was easy. I set the GTX 1080 as the primary display adapter, logged out of my computer, and logged back in. After this, performance was great!
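
To illustrate the pattern (this is a simplified sketch rather than Angle's actual code, and the function name is mine), the adapter index passed into CreateDevice is what determines which GPU executes the rendering:

#include <d3d9.h>	//link with d3d9.lib

IDirect3DDevice9* CreateDeviceOnDefaultAdapter(HWND hwnd)
{
	IDirect3D9* d3d9 = Direct3DCreate9(D3D_SDK_VERSION);

	D3DPRESENT_PARAMETERS pp = {};
	pp.Windowed = TRUE;
	pp.SwapEffect = D3DSWAPEFFECT_DISCARD;
	pp.hDeviceWindow = hwnd;

	IDirect3DDevice9* device = nullptr;
	d3d9->CreateDevice(
		D3DADAPTER_DEFAULT,	//usually the adapter hosting the primary desktop
		D3DDEVTYPE_HAL,		//hardware rasterization
		hwnd,
		D3DCREATE_HARDWARE_VERTEXPROCESSING,
		&pp,
		&device);
	return device;
}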

It was still possible to get bad performance, though. If I moved Chrome back to the built-in display, there appeared to be performance penalties from the frames being copied from the GTX 1080 back to the built-in adapter. On other machines the penalties might not be as severe.

What does my setup look like? I use the Sonnet eGPU cases (I have two) and usually have an NVidia GPU in one and an AMD GPU in the other. The Apple computer that I am using does not have the Thunderbolt 3 (USB-C) interface that is used by these cases, so I must use a Thunderbolt 3 (USB-C) to Thunderbolt 2 adapter to make this work.


Creating Development Certificates for Samsung Tizen TVs

Whether you are developing for a consumer Samsung TV or for one of the commercial SSSP displays, you’ll need a development certificate for your code to run. There is a difference in how the certificate is created for the commercial and consumer displays, but the process is mostly the same for both.

To get started you’ll need to already have Tizen Studio installed. Open the Tizen Studio package manager and make sure that you have the following components installed.

  • Samsung Certificate Extensions

If you don’t already have the component installed, select it for installation. You’ll also need to have the SDK component installed for the version of Tizen that you are targeting (ex: “5.0 TV”). Once the components are present, start the Tizen Studio Device Manager.

The device manager will be used to get the device’s ID (DUID) for consumer TVs and for installing the development certificate onto the display. For these steps to work, the TV must have development mode enabled and must be set to accept development requests from the same IP address as your development machine; it will refuse requests from other addresses. If you haven’t already enabled development mode, I have another post on how to do that here.

In the device manager there is an icon in the upper right corner of a phone connected to a computer. Select this icon. It is for establishing connections to the device manager. In the window that opens you will see a list of devices that you’ve previously connected to. If the IP address of your display is there, you can click on the icon of the on/off switch to reconnect to it. If the IP address of your display is not present, click on the + icon to add it. When adding, you can give the TV a descriptive name, enter the IP address, and the port on which to connect (usually 26101). Click on OK to return to the main Device Manager user interface and you should see your display connected. Right-click on the display and select DUID to see the ID of the display. Go ahead and copy it to the clipboard; you will need it later on. If you have multiple displays for which you will develop, repeat the same steps to collect the DUID values for the other displays and save them to a text document. Note that if you have both consumer and commercial displays, the DUIDs for them cannot be mixed with each other. You can perform the following steps for all of your consumer displays at once and then all of your commercial displays at once.
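
As an aside that is not part of the original walkthrough: the sdb command line tool that ships with Tizen Studio can establish the same connection, with the IP address below standing in for your display’s address.

sdb connect 192.168.1.50:26101
sdb devices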

Open the Certificate Manager. When it is opened for the first time, you may be asked to select a location from which to import certificate profiles. Select Cancel here. You will need to create both an author certificate and a distributor certificate. Click on the + icon in the upper right corner to start the process of creating a new certificate. What you select in the window that appears depends on the type of display for which you are developing.


Commercial (SSSP) Display Steps

For the commercial displays, select “Tizen.” In the next step you’ll be asked to enter a name for the certificate profile. If you develop for other device types (such as mobile devices, watches, or the consumer displays), you’ll need more than one certificate profile, so it is good to give them easily identifiable names. Enter a name here that lets you know that this is a certificate for developing for a commercial display and select Next.


Next you must select an author certificate. If you’ve created an author certificate before, you have the option to select it. If not, select the option to create a new one. I’ll assume that an author certificate has not been created yet. The minimal information you need for an author certificate is a name and a password for the certificate (don’t forget this password!). You can optionally enter your country code, state, city, organization, department, an e-mail address, and a filename in which the key file for the certificate will be saved. Enter your options and select “Next.”


The last selection to make is whether you want to use the default Tizen distributor certificate. While this selection is meant to allow you to submit mobile applications to the Tizen store, it is fine for our purposes. Select it and click on “Finish.” With this, you have a complete certificate profile for commercial displays.


Consumer Display Steps

For the consumer displays, when asked for the certificate type, select “Samsung.”


On the next screen you’ll be asked for the device type. Select “TV.”


Enter a name for the profile and select next.


Next you’ll select an author certificate. If you already have an author certificate that you’d like to use, you can select it here. If you’ve never created one before, select the first option to create a new certificate. If you had a certificate but it has expired, select the option to create a new certificate and check the box that says “Use an Existing Certificate.” If you have an application that has been published to the Tizen store and are creating a new certificate, you’ll want to use this option, since an application’s ID is based in part on the certificate with which it was signed.


Enter your author information. Remember what your password is, especially if you plan to publish your application under this certificate. When you click on “Next,” you’ll be asked to sign into your Samsung account. After signing in, your author certificate is created.

You’ll be presented with the option of backing up your certificate. While this isn’t required, it is strongly encouraged. You will want to keep this backup secure, as it forms part of the identity for your apps. You are almost done; you still need a distributor certificate.


On the next screen you are prompted to either create a new distributor certificate or select an existing one. Choose the option to create a new one.


Now it is time to use the DUID that you copied earlier. If it is already on your clipboard, it will automatically be pasted into one of the entries for DUID. You also have the option to change the privilege level, but not really. The two privileges available are “Public” and “Partner.” Partner gives you access to functionality that isn’t available to everyone, but to use Partner-level privileges they have to be granted to you by Samsung.


After you click on “Next,” you’ll be greeted with a confirmation that the certificate has been created, along with the path to the certificate.


For Both Consumer and Commercial

Now that your certificates have been created, you need to let the display know about them so that it can recognize applications signed with your certificate and allow them to run. To do this, return to the device manager. Right-click on your display in the device manager and select “Permit to install apps.” The display is ready to accept applications now.

Switching Certificate Profiles

If you are developing for more than one type of Tizen device, you’ll probably have to change which certificate profile you are using as you change platforms. When you need to change profiles, open the certificate manager. You will see a list of the profiles that you’ve set up and a check mark next to the active profile. If you want to change which profile is active, select it from the list and click on the check mark in the upper right corner.

With the certificate created and selected you can now move forward with deploying an application to the display. Start off with a hello world program just to see that it works.


Case Sensitive File System on Windows


As of Windows 10 build 1803, case sensitive file names can be enabled on Windows. This is a feature that I have found useful while working in an environment where some developers run macOS and others run Windows. Occasionally a macOS developer would encounter an error about a missing file that the Windows developers didn’t encounter. The problem was usually inconsistent casing between a file’s name and some reference to the file in code. By enabling the case sensitive file system, the Windows developers are better able to check when their casing is inconsistent.

The feature is enabled on a folder and its children. It isn’t something that you would want to try to enable system wide. I use this feature on project folders.

To enable the case sensitive file system open an administrative command prompt. Then use the following command.

fsutil.exe file SetCaseSensitiveInfo c:\PathTo\Target\Folder enable

After the command runs you can test it out by creating a few files in the folder that have the same name with different casing. If you ever want to turn off the case sensitive file system the command to do this is similar.

fsutil.exe file SetCaseSensitiveInfo c:\PathTo\Target\Folder disable
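
If you are unsure whether the flag is set on a folder, fsutil can also report the current state (the path here is an assumed example):

fsutil.exe file queryCaseSensitiveInfo c:\PathTo\Target\Folder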

 

Introduction to Direct2D: DirectWrite

In my previous installment of this series, I was rebuilding a video game interface. The interface resembled my target but did not have any text. How can I render text with Direct2D? The answer is DirectWrite.

What is DirectWrite?

DirectWrite is a GPU-accelerated text rendering API that works with Direct2D. It originally shipped with Windows 7 and (as of the publishing date of this article) still receives updates from Microsoft in the Windows 10 updates. It is a great solution for rendering text within a DXGI-based program. Surfaces are generally dependent on the device; DirectWrite resources are device independent, so there is no need to recreate them if the surface is lost.

Text Format

There is no default text style. When text is being rendered it must have a definite font, style, size, and weight. All of these text attributes are packaged in IDWriteTextFormat. If you have worked with CSS, think of IDWriteTextFormat as a style for text. Through this interface you select the font face, weight, size, and text alignment.

TOF(_pWriteFactory->CreateTextFormat(
	TEXT("Segoe UI"),	//font family
	NULL,				//font collection
	DWRITE_FONT_WEIGHT_EXTRA_BOLD,
	DWRITE_FONT_STYLE_NORMAL,
	DWRITE_FONT_STRETCH_NORMAL,
	40.0f,
	TEXT("en-us"),
	&_pDefaultFont
));

Drawing Text

I will discuss two ways to render a text string: DrawText and DrawTextLayout. The easier method is ID2D1RenderTarget::DrawText. This method accepts a text format object and a bounding rectangle and renders the text string at the location specified by the bounding rectangle. It also accepts optional arguments that affect text layout and metrics. While it is the easier of the two methods, it does not support mixed styles within the same text string. For that you need a text block rendered in IDWriteTextLayout with ID2D1RenderTarget::DrawTextLayout (discussed in the next section). Text rendered with this method will appear center-aligned to the bounding rectangle in which it is placed.

std::wstring test = TEXT("TEST");
D2D1_RECT_F textLocation{20,200,800,800};
_pRenderTarget->DrawTextW(
	test.c_str(), test.size(), //The text string and length
	_pDefaultFont.Get(),  //The font and style description
	&textLocation,  //The location in which to render the text
	_pBackgroundBrush.Get(),  //The brush to use on the text
	D2D1_DRAW_TEXT_OPTIONS_NONE, 
	DWRITE_MEASURING_MODE_NATURAL
);

Text Layout

A text layout serves a similar function as a text block in that it is used for showing text.  The primary difference is that a text block has a one-to-one relationship with what is rendered on the screen and a text layout (through the IDWriteTextLayout interface) can be rendered several times.  If there were labels that were used repeatedly with the same text, they could be implemented by creating an IDWriteTextLayout object and rendering it several times.

//In InitDeviceIndependentResources()

std::wstring  stringResult = L"RESULT";
TOF(_pWriteFactory->CreateTextLayout(
	stringResult.c_str(),
	stringResult.size(),
	_pDefaultFont.Get(),
	400, 90, 
	&_pTextLayout
));

//In OnRender()
_pRenderTarget->DrawTextLayout({ 0, 0 }, _pTextLayout.Get(), _pBackgroundBrush.Get());

[Image: The resulting text from the above code]

Unlike text rendered with ID2D1RenderTarget::DrawText, text rendered with IDWriteTextLayout can have styling applied to specific ranges of letters within the text.  If you need to mix text styles, use IDWriteTextLayout instead of using ID2D1RenderTarget::DrawText.  Create a TextLayout initially using a text format that covers the majority of the text in your layout.  Where deviations from your selected default should apply, create a DWRITE_TEXT_RANGE instance.  DWRITE_TEXT_RANGE contains the index of the starting character for the new text styling and the number of characters to which it should be applied.  The range is then passed to styling functions on the layout, such as SetFontWeight, SetFontSize, and SetFontStyle.

When creating the IDWriteTextLayout a width and a height for the text area are needed.  Content rendered within this rectangle will automatically be wrapped.

std::wstring  stringResult = L"RESULT";
TOF(_pWriteFactory->CreateTextLayout(
	stringResult.c_str(),
	stringResult.size(),
	_pDefaultFont.Get(),
	400, 90, 
	&_pTextLayout
));
DWRITE_TEXT_RANGE range{ 2,3 };
_pTextLayout->SetFontWeight(DWRITE_FONT_WEIGHT_EXTRA_LIGHT, range);


Applying DirectWrite to the Project Interface

Where I left off, I had used Direct2D to render color to areas in which the interface will show information.


The next step is to apply text to the interface.  This interface will ultimately be used for performing batch processing of some media files.  The details and implementation of that processing are not shown here and are not important since this is only showing an interface.  The text populating the interface in this example is for demonstrative purposes only.

As shown in creating the interface, I am making a list of shape types to render.  In the call to OnRender() there is a case statement that makes the necessary calls depending on the next shape type in the list.  Between the last post and this one, I changed my shape representation to a shape base class and subclasses instead of a single struct with the additional data packaged in unioned elements.

I am using IDWriteTextLayout to render text instead of ID2D1RenderTarget::DrawText. Since DrawText centers the text within the bounding rectangle, I could not get the layout to be what I wanted. Using IDWriteTextLayout in combination with layout options, I was able to achieve the layout that I was looking for.
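
The post does not show the layout creation for the text shapes, so the following is my sketch of how a TextShape’s alignment options might be applied; it reuses names from the listings below (TOF, _pWriteFactory, _pDefaultFont) and assumes t points to a TextShape. IDWriteTextLayout inherits from IDWriteTextFormat, so the alignment setters are available directly on the layout.

ComPtr<IDWriteTextLayout> textLayout;
TOF(_pWriteFactory->CreateTextLayout(
	t->text.c_str(), (UINT32)t->text.size(),
	_pDefaultFont.Get(),
	t->location.right - t->location.left,	//layout width
	t->location.bottom - t->location.top,	//layout height
	&textLayout));
TOF(textLayout->SetTextAlignment(t->textAlignment));
TOF(textLayout->SetParagraphAlignment(t->paragraphAlignment));
t->textLayout = textLayout;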

With the development of a hierarchy for the types of items that I am rendering, I am almost tempted at this point to just start defining a control hierarchy.  While that would have utility, it may also distract from the concepts being shown here.  A control hierarchy may be written some other day.  For now, the hierarchy I am using for shapes is shown in this code.

interface  IDispose {
	virtual void Dispose() = 0;
};

struct Shape: public IDispose {
	Shape() {}

	Shape(ShapeType shapeType, PaletteIndex paletteIndex)
	{
		this->shapeType = shapeType;
		this->paletteIndex = paletteIndex;
	}
	virtual void Dispose()
	{

	}
	std::wstring tag;
	ShapeType shapeType;
	PaletteIndex paletteIndex;
};

struct TextShape :public Shape {
	TextShape() {}
	TextShape(std::wstring text, D2D1_RECT_F location, TextStyle textStyle = TextStyle_Label, PaletteIndex palette = PaletteIndex_Primary, 
		DWRITE_TEXT_ALIGNMENT textAlignment = DWRITE_TEXT_ALIGNMENT_LEADING, 
		DWRITE_PARAGRAPH_ALIGNMENT paragraphAlignment = DWRITE_PARAGRAPH_ALIGNMENT_CENTER)
		:Shape(ShapeType_Text, palette)
	{
		this->text = text;
		this->location = location;
		this->textStyle = textStyle;
		this->paragraphAlignment = paragraphAlignment;
		this->textAlignment = textAlignment;
	}

	void Dispose() override 
	{
		this->textLayout = nullptr;
	}
	std::wstring text;
	D2D1_RECT_F location;
	TextStyle textStyle;
	ComPtr<IDWriteTextLayout> textLayout;
	DWRITE_PARAGRAPH_ALIGNMENT paragraphAlignment;
	DWRITE_TEXT_ALIGNMENT textAlignment;

};

struct RectangleShape : public Shape {
	RectangleShape() :Shape(ShapeType_Rectangle, PaletteIndex_Primary) {

	}
	RectangleShape(D2D1_RECT_F rect, PaletteIndex p) :
		Shape(ShapeType_Rectangle, p)
	{
		this->rect = rect;
	}
	D2D1_RECT_F rect;
};

struct EllipseShape : public Shape {
	EllipseShape():Shape(ShapeType_Ellipse, PaletteIndex_Primary)
	{}
	EllipseShape(D2D1_ELLIPSE ellipse, PaletteIndex p = PaletteIndex_Primary)
		:Shape(ShapeType_Ellipse, p)
	{
		this->ellipse = ellipse;
	}

	D2D1_ELLIPSE ellipse;
	
};

The OnRender() method has been modified to reflect the new struct hierarchy and now has additional code for rendering text.

switch ((*current)->shapeType)
{
	case ShapeType::ShapeType_Rectangle: 
	{
		std::shared_ptr<RectangleShape> r = std::static_pointer_cast<RectangleShape>(*current);
		_pRenderTarget->FillRectangle(r->rect, brush.Get());
		break;
	}
	case ShapeType_Ellipse: 
	{
		std::shared_ptr<EllipseShape> e = std::static_pointer_cast<EllipseShape>(*current);
		_pRenderTarget->FillEllipse(e->ellipse, brush.Get()); 
		break;
	}
	case ShapeType_Text:
	{
		std::shared_ptr<TextShape> t = std::static_pointer_cast<TextShape>(*current);
		if (t->textLayout)
		{
			ComPtr<IDWriteTextFormat> format = _pDefaultFont;
			//_pRenderTarget->DrawTextW(t->text.c_str(), t->text.size(), format.Get(), t->location, brush.Get());
			D2D1_POINT_2F p = { t->location.left, t->location.top };
			_pRenderTarget->DrawTextLayout(p, t->textLayout.Get(), brush.Get());
		}
		break;
	}
	default:
		break;
}

And now the UI looks more complete.


There are adjustments to be made with the margins on the text and positioning.  Those are going to be ignored until other items are in place.  One could spend hours making adjustments to positioning.  I am holding off on small adjustments until the much larger items are complete.

The complete source code can be found in the following commit on Github.

https://github.com/j2inet/CppAppBase/tree/5189359d50e7cdaee79194436563b45ccd30a88e

There are a few places in the UI where I would like to have vertically rendered text.  A 10 to 15 degree tilt has yet to be applied to the entire UI.  I would also like to be able to display preview images from the items being processed.  My next posts in this series will address: Displaying Images in Direct2D and Applying Transformations in Direct2D.


Introduction to Direct2D: Part 1

DirectX is a family of APIs that provide functionality related to multimedia. Some of the APIs are focused on sound, some on input, and some on graphics. Today I want to present one of the graphical APIs based on the DirectX Graphics Infrastructure (DXGI), known as Direct2D. The Direct2D API is for rendering graphics in 2D (as suggested by the name). The API takes advantage of features in the graphics card to accelerate rendering.

When working with Direct2D you don’t create the objects directly. Instead, you use a factory that creates the objects and returns interfaces to them, or you use those returned interfaces to request additional objects. The interfaces returned by the Direct2D APIs are COM pointers. COM, or Component Object Model, is a standard for making components that interact with each other. The standard has existed since 1993. Despite being 30 years old, it is important to Windows and is used more than one might think within .NET.  There are books that cover how COM works. Some of them are old, but don’t let that make you think the information isn’t relevant. A full discussion of COM is outside the scope of this post.

To get an initialized window with which to work, I’ll be using the sample Win32/C++ code that I presented in a previous post. See that post for more information about how the base window works. In my derived class I’ll begin initializing Direct2D specific objects.

Adding Update and Render Methods

The AppWindow sample implements a message pump that uses PeekMessage instead of GetMessage. This allows the application to do other things instead of halting when there are no messages to be processed. When there are no messages a method named Idle() is called. This method will be repurposed for adding an update/render loop.

There will be four methods added: Update(), OnUpdate(), Render(), and OnRender(). Both OnUpdate() and OnRender() are virtual; these methods contain code that could be completely replaced or overridden if this code were repurposed for another application.  The other two methods, Update() and Render(), contain code that needs to be called on every update and render cycle and that I would not want affected when derived applications override the rendering or updating. The Update() method also queries the performance timer to calculate the amount of time that has passed since the last update. This information could be used to know how far to move an object in the scene on the next frame update, or to calculate the FPS.

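The post shows this code as a screenshot. As a stand-in, here is a minimal sketch of how the methods could fit together; the members _lastCounter and _lastElapsedSeconds are my assumptions, not names from the original code.

void D2D1AppWindow::Idle()
{
	Update();	//timing bookkeeping, then the overridable OnUpdate()
	Render();	//shared rendering steps, then the overridable OnRender()
}

void D2D1AppWindow::Update()
{
	LARGE_INTEGER frequency, counter;
	QueryPerformanceFrequency(&frequency);
	QueryPerformanceCounter(&counter);
	//seconds since the previous update; usable for animation or FPS
	_lastElapsedSeconds = double(counter.QuadPart - _lastCounter.QuadPart)
		/ double(frequency.QuadPart);
	_lastCounter = counter;
	OnUpdate();
}

void D2D1AppWindow::Render()
{
	OnRender();
}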

Initializing D2D

You’ll see two variations of the D2D interfaces. One version is prefixed with D2D1 (note the numeral 1 at the end). The D2D1 interfaces were released with Windows 8 and are the ones that I’ll be using here.  There are two interfaces that we need to initialize: ID2D1Factory, which will be the object from which we directly or indirectly make the other D2D1 objects, and ID2D1HwndRenderTarget, which is an interface that we use for painting our scene onto the window object created by the application base.  As we create objects, something to keep in mind is the two classifications into which DXGI objects fall (this isn’t limited to Direct2D; it is applicable to the other DirectX graphical APIs). An object can be a device dependent resource or a device independent resource. What’s the difference?

Device Dependent and Device Independent Resources

Device in this context refers to the video rendering device. This will usually be the video card, but in some scenarios it could be a pure software device or even a resource hosted on another computer. To keep things simple, let’s view device as being synonymous with video card or video adapter. Some resources allocate resources on the device (video adapter). But these resources could be lost at any moment if the state of the device changes (changing the resolution of the device will do this) or if the device gets disconnected (yes, a video adapter can be disconnected while the program is still running). If this occurs, none of the device dependent resources remain valid.

Device independent resources do not allocate resources on the video card. These resources use RAM allocated to the CPU and are not affected by changes in the display configuration.

When a device is lost, all of the device dependent resources must be released and recreated. In some programs, a valid and simple way to do this is to terminate the program and start a new instance of it. While that is an easy solution, the program will lose any context of what the user was doing unless it also saves and restores that state. For scenarios in which I’ve used this solution, access to the computer hardware was restricted, making the loss of a device an extremely low frequency event (something that might happen once a day at 3 AM while automated maintenance is doing something to the computer).

Examples of objects that are device independent resources include the ID2D1Factory object that is used for creating the other objects, Geometry objects used for describing shapes (objects that implement the ID2D1Geometry interface), and stroke style objects (ID2D1StrokeStyle).

Examples of device dependent resources include ID2D1Device (which refers to the device that could itself be lost), brush resources (which implement ID2D1Brush), and ID2D1Layer (for creating rendering layers).

Object Creation

When initializing my D2D objects, I have two methods in which the initialization occurs. The method InitDeviceResources() initializes the device dependent resources. InitDeviceIndependentResources() creates the other resources. During a program’s lifecycle, InitDeviceIndependentResources() will only be called once. InitDeviceResources() will be called at least once, but could be called many more times.  To ensure that these resources are released, there is an additional method, DiscardDeviceResources(), that releases all of the device dependent resources. Since the ComPtr type is being used, setting a pointer to nullptr results in the resource being released.


The call to AppWindow::Init() results in the creation of the window. The initial implementations for InitDeviceIndependentResources() and InitDeviceResources() will create the ID2D1Factory object and create a RenderTarget for rendering to the screen.


InitDeviceIndependentResources() will create the D2D1 factory. The other resources will be created in InitDeviceResources() and cleared in DiscardDeviceResources().
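
These implementations appear as screenshots in the original post. Here is a sketch of what the render target creation and cleanup could look like; GetHandle() is my assumed accessor for the window’s HWND, and the real code may differ.

void D2D1AppWindow::InitDeviceResources()
{
	RECT rc;
	GetClientRect(GetHandle(), &rc);	//GetHandle() assumed to return the HWND
	TOF(_pD2D1Factory->CreateHwndRenderTarget(
		D2D1::RenderTargetProperties(),
		D2D1::HwndRenderTargetProperties(
			GetHandle(),
			D2D1::SizeU(rc.right - rc.left, rc.bottom - rc.top)),
		&_pRenderTarget));
}

void D2D1AppWindow::DiscardDeviceResources()
{
	_pRenderTarget = nullptr;	//assigning nullptr to the ComPtr releases it
}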

Losing the Device

During the render phase of the application, the various calls made result in commands being queued for execution on the video card to render our scene.  The commands are not executed until we end our drawing (which actually ends the adding of commands to the queue). If the connection to the device is lost, the call to EndDraw() returns a value that can be examined to detect that the device has disconnected.  If the returned value is D2DERR_RECREATE_TARGET, the resources must be reinitialized.


Rendering

In DXGI applications, drawing is performed using render target objects. There are several types of render targets. We will be using an ID2D1RenderTarget in this sample application; ours renders to the application window. Render targets could also render to off-screen surfaces, textures, or remote machines.

In the previous code sample a render target was already being used to clear the application window and fill it with the color blue. The general usage pattern will look like the following.

ID2D1RenderTarget* myRenderTarget;
myRenderTarget->BeginDraw();
//call draw functions here
myRenderTarget->EndDraw();

Let’s draw a square. There are three things that we need: a description of the square’s geometry, a brush to draw the square with, and a color to assign to the brush. If you’ve used Win32’s GDI (Graphics Device Interface), you’ve seen the term “brush” used in a similar sense. A brush is a system resource that contains information on what color to use when drawing something. A brush could render a solid color, or it could use portions of a bitmap to fill a shape.

The struct D2D1_RECT_F is used to describe the square here (also available: D2D1_RECT_U and D2D1_RECT_L). This structure contains the elements left, top, right, and bottom. Any D2D1_RECT_F instances that are created reside within the CPU’s memory.  The color to be used with the brush is also assigned through a struct that only uses CPU memory: the type D2D1_COLOR_F, with the values red, green, blue, and alpha of type float.

For the brush there are GPU resources that will be consumed. It is a device dependent resource. We request that a brush be created and receive a pointer to its interface by calling ID2D1RenderTarget::CreateSolidColorBrush. The method arguments are the color that the brush should be and a pointer to the object that will receive the interface address.

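The post shows this step as a screenshot. The calls are along these lines, reusing the member names that appear in the later listings (the square’s coordinates match the ones set in InitDeviceIndependentResources()):

_mySquare = { 20, 20, 30, 30 };	//left, top, right, bottom
TOF(_pRenderTarget->CreateSolidColorBrush(_primaryColor, &_pPrimaryBrush));
TOF(_pRenderTarget->CreateSolidColorBrush(_secondaryColor, &_pSecondaryBrush));
TOF(_pRenderTarget->CreateSolidColorBrush(_backgroundColor, &_pBackgroundBrush));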

To render the square, the OnRender() method requires one or two additional calls. When rendering a geometry, there are two types of methods available: one type outlines the geometry and the other fills the shape (which results in a solid shape). To render a geometry that has both a solid fill and an outline, first use the fill method and then the draw method, as I do below.

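The screenshot showed the two calls; they appear again in the OnRender() listing near the end of this post:

_pRenderTarget->FillRectangle(&_mySquare, _pPrimaryBrush.Get());
_pRenderTarget->DrawRectangle(&_mySquare, _pSecondaryBrush.Get());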

There are a number of other geometries that ID2D1RenderTarget supports: ellipses, rectangles, rounded rectangles, text, and other constructed geometries.

Resizing the Window

The square renders, but if you resize the window you’ll see a problem. As the window is shrunk on one axis or the other, the square will shrink on that axis; the square will become a rectangle with uneven sides. Fixing this is easy enough: the render target must be updated to reflect the new window size. Since this sample application is derived from the CppAppBase that I presented in a previous post, there exists a resize method that can be overridden to update the render target. ID2D1HwndRenderTarget::Resize accepts a D2D1_SIZE_U struct and updates the render target size accordingly.

void D2D1AppWindow::OnResize(UINT width, UINT height)
{
	this->_size = D2D1_SIZE_U{ width, height };
	if (_pRenderTarget)
	{
		_pRenderTarget->Resize(_size);
	}
}

There is still an annoying behaviour: while the window is being resized, the screen is blank. Part of the reason for this is that the rendering only occurs when there are no messages to process, and resizing generates messages. One of the messages generated while resizing is a WM_PAINT message, which tells the application to redraw itself. If we call OnRender() in response to a WM_PAINT message, this will handle the screen updates while resizing. The AppWindow base class has an OnPaint() method that is called when a WM_PAINT message comes through. Overriding this method and adding a call to OnRender() results in a window that doesn’t go blank when resizing.

void D2D1AppWindow::OnPaint()
{
	this->OnRender();
}

Rendering a Shape List

One could write code to individually render each one of the primitives needed to make up a more complex scene. For applications with complex scenes, doing this is especially undesirable. A better solution is to encode the primitives to be drawn within a file; if a change needs to be made to a scene, the change would be made in the file instead of in code.

For the next sample I’m going to move halfway to such a solution: I’m making a list of primitives to be rendered and am populating the list from code. With a bit more work, the data could be externalized. I’m using the list to start rebuilding an interface. I was playing a video game on my Nintendo Switch that had a screen displayed between levels showing how well someone did. I thought the interface was both simple and æsthetically pleasing.


Creating something similar to this is my goal. It is composed of several rectangles, a few concentric circles, and text. Were the interface all the same type of geometry element, it could be defined with a simple list of attributes about the rectangles. Since there is more than one type of geometry, the list elements must identify a geometry type. I’ve made a struct that has a field identifying the shape type (rectangle or ellipse for now), a field identifying which of the colors in my three color palette the item should use, and the information for the shape itself. The rectangle data and ellipse data are in a union to minimize the amount of memory used for each item.

enum   PaletteIndex {
	Primary = 0,
	Secondary = 1,
	Background = 2
};

enum ShapeType {
	ShapeType_Rectangle = 0, 
	ShapeType_Ellipse = 1
};

struct ColoredShape
{
	static ColoredShape MakeEllipse(D2D1_ELLIPSE e, PaletteIndex p)
	{
		ColoredShape c{ 0 };
		c.shapeType = ShapeType::ShapeType_Ellipse;
		c.ellipse = e;
		c.paletteIndex = p;
		return c;
	}

	static ColoredShape MakeRectangle(D2D1_RECT_F r, PaletteIndex p)
	{
		ColoredShape c{ 0 };
		c.shapeType = ShapeType::ShapeType_Rectangle;
		c.rect = r;
		c.paletteIndex = p;
		return c;
	}

	union  {
		D2D1_ELLIPSE ellipse;
		D2D1_RECT_F rect;
	};
	ShapeType shapeType;
	PaletteIndex paletteIndex;
};

The factory methods make it easier to initialize instances of the value. I’ve got a vector that holds these structs. It is populated with the data for each rectangle and ellipse to be rendered. The items will be rendered in the same order that they appear in the list. If two items overlap, the latter of the items will be rendered on top.

void D2D1AppWindow::InitDeviceIndependentResources()
{
	TOF(D2D1CreateFactory(D2D1_FACTORY_TYPE_SINGLE_THREADED, IID_PPV_ARGS(&_pD2D1Factory)));
	_primaryColor = D2D1::ColorF(0.7843F, 0.0f, 0.0f, 1.0f);
	_secondaryColor = D2D1::ColorF(0.09411, 0.09411, 0.08593, 1.0);
	_backgroundColor = D2D1::ColorF(0.91014, 0.847655, 0.75f, 1.0);
	_mySquare = { 20, 20, 30, 30 };

	
	_shapeList.push_back(ColoredShape::MakeRectangle({0,0,456,104 }, Primary));
	_shapeList.push_back(ColoredShape::MakeRectangle({ 0,128,456,550 }, Primary ));
	_shapeList.push_back(ColoredShape::MakeRectangle({ 488,0,508,110 }, Secondary ));
	_shapeList.push_back(ColoredShape::MakeRectangle({ 488,40,900,115 }, Secondary ));

	_shapeList.push_back(ColoredShape::MakeRectangle({ 530,130,850,180 }, Secondary));
	_shapeList.push_back(ColoredShape::MakeRectangle({ 690,130,850,180 }, Primary));

	_shapeList.push_back(ColoredShape::MakeRectangle({ 530,190,850,240 }, Secondary));
	_shapeList.push_back(ColoredShape::MakeRectangle({ 690,190,850,240 }, Primary));

	_shapeList.push_back(ColoredShape::MakeRectangle({ 530,250,850,300 }, Secondary));
	_shapeList.push_back(ColoredShape::MakeRectangle({ 690,250,850,300 }, Primary));

	_shapeList.push_back(ColoredShape::MakeRectangle({ 0,350,600,450 }, Secondary ));
	_shapeList.push_back(ColoredShape::MakeRectangle({ 0,350,600,450 }, Secondary));

	_shapeList.push_back(ColoredShape::MakeEllipse({ 570,400,60, 60 }, Secondary));
	_shapeList.push_back(ColoredShape::MakeEllipse({ 570,400,57, 57 }, Background));
	_shapeList.push_back(ColoredShape::MakeEllipse({ 570,400,55, 55 }, Secondary));
	_shapeList.push_back(ColoredShape::MakeEllipse({ 570,400,52, 52 }, Background));

	_shapeList.push_back(ColoredShape::MakeRectangle({ 476,470,870, 550 }, Secondary));
}

In the OnRender() method this list is iterated and each element is drawn.

void D2D1AppWindow::OnRender()
{
	HRESULT hr;
	if (!_pRenderTarget)
		return;
	_pRenderTarget->BeginDraw();
	_pRenderTarget->Clear(_backgroundColor);
	_pRenderTarget->FillRectangle(&_mySquare, _pPrimaryBrush.Get());
	_pRenderTarget->DrawRectangle(&_mySquare, _pSecondaryBrush.Get());

	for (auto current = _shapeList.begin(); current != _shapeList.end(); ++current)
	{
		ComPtr<ID2D1SolidColorBrush> brush;
		switch (current->paletteIndex)
		{
		case Primary: brush = _pPrimaryBrush; break;
		case Secondary: brush = _pSecondaryBrush; break;
		case Background: brush = _pBackgroundBrush; break;
		default: brush = _pPrimaryBrush;
		}
		switch (current->shapeType)
		{
		case ShapeType::ShapeType_Rectangle:_pRenderTarget->FillRectangle(&current->rect, brush.Get()); break;
		case ShapeType::ShapeType_Ellipse:_pRenderTarget->FillEllipse(&current->ellipse, brush.Get()); break;
		}
		
	}
	
	hr = _pRenderTarget->EndDraw();
	if (hr == D2DERR_RECREATE_TARGET)
	{
		// The surface has been lost. We need to recreate our
		// device dependent resources. 
		this->DiscardDeviceResources();
		this->InitDeviceResources();
	}
}

Running the program gives me something that looks like it was inspired by the video game interface.


There are similarities, but there are also plenty of differences. Aside from the missing text, in the video game interface the text is rotated slightly. With the way rectangles are defined now, there is no apparent way to render a rectangle that isn’t aligned with the X or Y axis. To move forward, we need to know how to apply transformations to the rendering and how to render text. I’ll continue there in the next part of this series.

The complete source code for the program can be found on github.

https://github.com/j2inet/CppAppBase

 

Win32/C++ Application Base


Win32 is necessary for development in many current technologies.  Some of my current projects utilize Win32/C++ applications.  Many of them use a similar pattern for starting the application.  Initializing the window that hosts these applications is not particularly interesting, but it is a foundation block for some of my applications that I will share in the future.

In Win32 UI programs there will be at least one function defined that is of the type known as WNDPROC.  A WNDPROC is a callback function.  It is the primary function through which Windows communicates various events to an application.  When a user moves their mouse over an application, WNDPROC is called.  When the user presses a key, WNDPROC is called.  When an application is being notified to render its UI again, WNDPROC is called.  The call signature for WNDPROC is shown below.

LRESULT CALLBACK WindowProc(
  _In_ HWND   hwnd,
  _In_ UINT   uMsg,
  _In_ WPARAM wParam,
  _In_ LPARAM lParam
);

The first argument in this function is a handle to the window that is the intended recipient of the message.  An application could have more than one window, but an application with only one window will always see the same value here.  The second argument is a numerical value that identifies the event or message being sent.  The constants that define possible values are generally prefixed with WM_ (meaning Windows Message).  Examples of such values include: WM_COMMAND, which is generally the result of a button being clicked; WM_PAINT, which tells the application to redraw its UI; WM_SIZE, meaning that the window has been resized; and WM_CLOSE, meaning that there was a request to close the window.

Because of the central position that the WNDPROC function plays in receiving notifications from the operating system, if all responses to the operating system were handled here the function would quickly grow big.  I have seen this done, and it can become a nightmare on shared projects.  One way to deal with this is to have the WNDPROC function act only as a router for incoming messages.  A simple way to implement this is with a switch statement where each case does a minimal amount of work before passing the message on to a function made specifically to handle it.

That is the solution used in the projects I plan to share in the future.  I created an abstract base class named AppWindow that creates an application window; has a few methods for handling certain Windows messages; and has case statements for calling those methods in response to a Windows message.

Windows communicates these messages to an application through a message queue.  An application must retrieve the next message to begin processing it.  If an application does not retrieve any messages for a certain period of time, then Windows assumes the application is locked up or busy and may prompt the user to either terminate or wait for the program.  To keep messages going through the queue, an application must implement a message pump.  In a simple implementation of a message pump, two functions are called in a loop: GetMessage, which receives a MSG struct containing the information on a message, and DispatchMessage, to have the message handed off to the application’s WNDPROC for processing.

If there are no messages available, calling GetMessage will cause the application to wait until there is one.  During this wait, the main thread of the application will not use any CPU bandwidth.  In some applications, such as games or other applications that continuously perform UI updates, using GetMessage is undesirable since it can cause the UI thread to wait.  In such applications, PeekMessage can be used instead.  Unlike GetMessage, PeekMessage will not block if there are no messages available; its return value indicates whether a new message was available.  If there are no messages to be processed, the application can perform other tasks.

An application may modify a message between the time it is retrieved with PeekMessage or GetMessage and when it is handed off through DispatchMessage.  You will see the use of a method named TranslateMessage which, despite its name, does not modify any messages.  Instead, it creates new messages in response to certain virtual key messages and adds them to the message queue.  For the programs that I present, the specifics of what it does are not important, and I will not be modifying any messages in the message pump.
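
Putting those pieces together, a PeekMessage-based pump would look something like the following sketch; MessagePump and Idle are my stand-in names for the base class members (the actual implementation is in the repository linked at the end of this post).

void AppWindow::MessagePump()
{
	MSG msg = {};
	while (msg.message != WM_QUIT)
	{
		if (PeekMessage(&msg, nullptr, 0, 0, PM_REMOVE))
		{
			TranslateMessage(&msg);	//may queue character messages
			DispatchMessage(&msg);	//routes the message to WNDPROC
		}
		else
		{
			Idle();	//no messages waiting; do per-frame work
		}
	}
}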

I do not use any of the traditional Win32 UI elements in the programs that I will be sharing (buttons, labels, text boxes, list boxes, etc.) but those elements could easily be created within the application.  If I were making a framework for such controls, I would probably make classes for each one.  My base class does have a CreateButton and CreateLabel method that are used for debug purposes.

[Image: The AppWindow.h file]
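
The header appears as an image in the original post. An abridged sketch of its shape, based on the description above (the exact members and signatures in the repository differ):

class AppWindow
{
public:
	AppWindow(HINSTANCE hInstance);
	virtual void Init();	//registers the window class and creates the window
	void MessagePump();	//PeekMessage loop shown earlier
protected:
	virtual LPCWSTR GetWindowClassName() = 0;	//unique name for class registration
	virtual void Idle();
	virtual void OnPaint();
	virtual void OnResize(UINT width, UINT height);
	HWND CreateButton(LPCWSTR text);	//debug helpers mentioned above
	HWND CreateLabel(LPCWSTR text);
	static LRESULT CALLBACK WndProc(HWND hwnd, UINT uMsg, WPARAM wParam, LPARAM lParam);
	HWND _hWnd = nullptr;
	HINSTANCE _hInstance = nullptr;
};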

This class is abstract and cannot be directly instantiated, so there must first be a derived class.  The only methods that the derived class must absolutely implement are the constructor and the method GetWindowClassName().  The constructor must accept an HINSTANCE argument and pass it to the AppWindow base class constructor. GetWindowClassName() must return a unique string; this string is going to be used for registering the window class.

[Image: Full implementation of a derived window]
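
Sketched out, a minimal derived window needs little more than this (again, my reconstruction rather than the repository’s exact code):

class DerivedWindow : public AppWindow
{
public:
	DerivedWindow(HINSTANCE hInstance) : AppWindow(hInstance) {}
protected:
	LPCWSTR GetWindowClassName() override { return L"DerivedWindowClass"; }
};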

To make an application that runs, all that is left to do is create a derived application window, call its Init() method (which is where it actually creates its window), and call its message pump.

derivedWindow_main_cpp
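
In outline, that entry point looks something like the following sketch (file and method names other than Init() are assumptions; the actual file is in the repository):

#include <Windows.h>
#include "DerivedWindow.h"   // assumed header name

int APIENTRY wWinMain(HINSTANCE hInstance, HINSTANCE, LPWSTR, int)
{
    DerivedWindow window(hInstance);
    window.Init();          // creates the actual window
    window.MessagePump();   // assumed name; runs until WM_QUIT
    return 0;
}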

Running the application will result in a window showing a label and a button.  It does not do anything yet, which is what is desired.  The code samples to be posted in the future require that there be an initialized window (which this provides).  The entirety of the code can be found on GitHub.

Relevant Github Commit

Latest Version

HRESULT and COM Exceptions

I have some posts queued that make use of C++ exception classes that have been useful in my previous projects.  These classes were primarily used for logging.  COM calls return an HRESULT indicating the success or failure of the call.  These return values should be examined, and action should be taken if there is a failure.  What type of action to take will depend on the scenario.  In some cases an application may fall back on alternative functionality.  In other cases the failure may be irrecoverable.  The worst thing to do is nothing at all.  An early failure could result in a problem that is not noticed until later and require a painful debugging experience.

For the sample code that I share here, I will not implement full error handling, to avoid distraction.  But to avoid undetected failures, I do not want to leave the values unexamined.  In the sample code I will use an inline function named TOF (Throw On Failure).  TOF throws an exception if it is passed a failed return value.  In most of the sample code presented here, the exception will not be caught.  When the exception is not caught, the program is designed to terminate.  On success, the TOF function returns the HRESULT, so the value is still available for further examination if needed.
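
A minimal sketch of such a function follows; the real implementation, linked below, uses its own exception class and may differ in details:

#include <comdef.h>   // for _com_error

inline HRESULT TOF(HRESULT hr)
{
    if (FAILED(hr))
    {
        throw _com_error(hr);   // stand-in for the project's COM exception class
    }
    return hr;   // success: pass the HRESULT through for further examination
}

A call would then look like TOF(someComObject->SomeMethod()); (a hypothetical call), with a failure surfacing immediately rather than later.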

The exception classes and the TOF function can be found on GitHub.

https://github.com/j2inet/CppAppBase/blob/master/src/common/Exception.h



TIP: Multi-GPU WebGL Performance

There’s a large-screen iMac that I use for development a lot. It has a 5K screen. The advantage of this is that when I am working on a project that runs at 4K resolution I can develop it on this machine and still have some pixels left over for dev tools. But a problem with the machine is that it never had a great GPU. It pushes a lot of pixels, but Apple used a mobile-class GPU (which is usually lower powered). It’s also an older machine. The solution for this: an external GPU. It took a bit of finagling, but I got an Nvidia GTX 1080 working on the machine. When I tried testing some WebGL shaders on the computer at full resolution, the performance was awful! What was going on?

The performance suggested that the external GPU wasn’t being used. I thought that starting Chrome on the secondary (external) screen would solve this problem. It didn’t. Thankfully the Windows 10 Task Manager also shows GPU usage. I found that no matter where I placed the window, the internal (weak) GPU was the one being used.

My hypothesis was that Chrome was creating its rendering surfaces on the primary GPU. To test this I changed the primary screen to be one of the screens attached to the external GPU. The results were promising: I got much better performance. But there was still a significant difference between running full screen on the built-in screen and on the external one. The Task Manager showed that the Desktop Window Manager was using a lot more cycles when the Chrome window ran on the internal screen. Essentially, the system was rendering with the external GPU and then copying the screen to the internal GPU on every cycle. For my purposes this needed no further solution. I connected the eGPU to a 4K display and did my testing from it.

If you’ve got multiple GPUs and want WebGL to run on a specific one, for now you’ll have to make a screen attached to the desired GPU your primary display.


egpu

Sonnet eGPU Enclosure

Changing the Default Tizen 5.0 Project for Samsung TVs

Tizen-Pinwheel-On-Light-RGB

When using Tizen Studio, if you start a project from one of the templates for a TV, you may find that the project won’t deploy to a Samsung consumer TV. There are a couple of changes that can be made to take care of this.

One is to edit the config.xml. There are a couple of lines in it to be changed. There is an element named tizen:profile with a name attribute of “tv”. Change this to “tv-samsung”. The other change is in a file that isn’t listed by the IDE named “.tproject”.  Under the Platform element is a text value of “tv-5.0”. Change this to “tv-samsung-5.0”. I’ve found that even on a TV running Tizen 4.0 these changes are sufficient. Just don’t use any Tizen 5.0 features on a display that is running an older OS.
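
For reference, the edited elements end up looking like this (surrounding attributes and elements omitted):

<!-- config.xml -->
<tizen:profile name="tv-samsung"/>

<!-- .tproject -->
<Platform>tv-samsung-5.0</Platform>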

Related: Developing for older Samsung TVs

 

 

Using WITS for Samsung/Tizen TV Development


One of the development scenarios that makes me cringe is an environment in which the steps and time from changing a line of code to seeing its effect are high. This usually happens in environments with specialized hardware, limited licenses, or sensitive configurations, leading to the development machine (as in the machine on which code is edited) not being suitable for or capable of running the code that has been written.  There is sometimes some aspect of this in cross-platform development. While emulators are often helpful in reducing this, they are not always a suitable solution, since emulators don’t emulate 100% of the target platform’s functionality.

For developing on TVs running Tizen (which will be more than just Samsung TVs), Samsung has made available a tool called WITS that reduces the cycle from changing code to seeing it run.

Setting up WITS

To set up WITS, you first need to have already installed and configured Tizen Studio and Node. The system’s PATH variable must also include the paths to tizen-studio/tools and tizen-studio/tools/ide/bin (you’ll need to complete those paths according to the location at which you’ve installed Tizen Studio).  You’ll also need to already have a certificate profile defined for your TV.
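
On Windows, updating the PATH for the current session might look like the following; the install location here is an assumption, so adjust it to your own installation:

set PATH=%PATH%;C:\tizen-studio\tools;C:\tizen-studio\tools\ide\bin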

The files that you need for WITS are hosted on git. Clone the files onto your machine.

git clone https://github.com/Samsung/Wits.git

Enter the Wits folder and install the node dependencies

cd Wits
npm install

Next, in the folder there is a file named profileInfo.json. The contents of the file must be updated to point to the profiles.xml for your certificate and to name the certificate profile to use. Windows users, note that whenever you enter a path for Wits you will need to use forward slashes (/), not backslashes (\).  For my installation the updated file looks like the following.

{
  "name": "TizenTVTest2",
  "path": "C:/shares/sdks/tizen/tizen-studio-data/profile/profiles.xml"
}


Configuring Wits to Use Your Project

Wits needs to know the location of your project. Open connectionInfo.json. There is an array element named baseAppPaths; enter the path to your Tizen application here.  If you would like to make things convenient, within this file also set the “ip” element to the IP address of the TV you are targeting. This isn’t strictly necessary, since you will be prompted for it when running a program, but the prompt will default to the value that you enter here.
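
An edited connectionInfo.json might look like the sketch below; the file contains other fields as well, and the path and address here are made up:

{
  "baseAppPaths": ["C:/shares/projects/myTizenApp"],
  "ip": "10.0.0.50"
}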

Running your Project

From the command prompt, while in the Wits directory, use npm to start the project.

npm start

You will be prompted for a number of items. The default values for these items come from the connectionInfo.json file that you modified in the previous section. You should be able to press Enter without changing the values of any of these elements.

PS C:\shares\projects\j2inet\witsTest\Wits> npm start

> Wits@1.0.0 start C:\shares\projects\j2inet\witsTest\Wits
> node app.js

Start Wits............
? Input your Application Path : C:/shares/projects/j2inet/MastercardController/workspace/SystemInfo2
? Input your Application width (1920 or 1280) : 1920
? Input your TV Ip address(If using Emulator, input 0.0.0.0) : 10.11.86.62
? Input your port number : 8498
? Do you want to launch with chrome DevTools? : Yes


A few moments later you’ll see your project running on the TV.

Deploying File Changes

This is where Wits is extremely convenient. If you make a change to a file, the application will automatically update on the TV. There’s nothing you need to do; Wits watches the project for changes and reacts to them automatically!

Connecting Windows 10 IoT Core to a Hidden Network

For some odd reason, while Windows 10 IoT Core has the capability to connect to hidden networks, it doesn’t expose this capability in its UI. Given its target audience, I can to some degree understand it not having features that guide a user through connecting to a hidden network, while at the same time I see this as an inconvenience.

Isn’t It Easier to Unhide the Network

No, at least not when you have no control over the network. There’s an argument to be made about why hiding a network is not an effective security measure. Whether those arguments fail or make great points is irrelevant in environments where you personally have no control or influence over the network.

There Are Several Ways to Connect. Which Should I Use?

I found a few solutions to this problem, but I’m only presenting the one that I found to be satisficing.  The method requires that the IoT device first be connected to a wired network.

On a computer (as in your laptop or desktop) that already has a connection to the wireless network, export the wireless profile. Copy this to the Windows 10 IoT device and then import the profile. Let’s talk about how to do each of those steps.

Exporting Your Wireless Profile

On the computer that has a connection to the wireless network, open a PowerShell instance.  Use the following command to export your wireless profile.

netsh wlan export profile name=<ProfileName>

Here, substitute the name of your wireless profile for the last parameter (without the brackets).  This is the same name that shows up in the Windows network settings for the network to which you are connected.  When you press Enter, netsh will create an XML file with the wireless profile. Take note of the location where it was saved.

Copying the Profile to the Windows 10 IoT Device

One of the convenient things about Windows 10 IoT Core is that it has many of the behaviors that the Windows desktop has. This includes the ability to read and write its file system over the network. Connect your Windows 10 IoT device to a wired network and take note of the IP address assigned to it. In the Windows File Explorer on your desktop, enter the following.

\\<IP address>\c$

You will be prompted to enter the username/password for the machine. The user name is Administrator. The password has in the past defaulted to p@ssw0rd, but you might have specified a different password at setup. Once authenticated, you’ll see the file system for the device. Copy the XML file over.

Importing the Wireless Profile

Open a PowerShell instance to the Raspberry Pi. The easiest way to do this is to use the Windows 10 IoT Dashboard. Under “My devices” you should see your device listed. Right-click on it and select “Launch PowerShell”.

IoTLaunchPowershell

Once in PowerShell, navigate to the directory in which you saved the XML profile and use netsh to import it.

netsh wlan add profile filename=<FileName.xml>

After substituting the file name (without the brackets) and pressing Enter, the device will be aware of the network. From the UI on the device, if you go into the network settings you can now select the hidden network. It will prompt you for the password, and then you’ll be connected.

Related Affiliate Links

Windows 10 for the Internet of Things, Book

Dragonboard 410c, A tiny board compatible with Windows 10 IoT with integrated GPS

Minnowboard Turbot, another Windows 10 IoT Compatible board

Raspberry Pi Starter Kit

TypeScript in Tizen

I was writing a program to run on my television and encountered a scenario that I’ve run into many times before: an HTML-enabled device supports a JavaScript standard that is older than the one I would like to use. The easiest workaround is to use a tool that compiles from a more recent version of JavaScript (or something similar) back to the version supported by the hardware. This is something I’ve done when developing for BrightSign and other devices.

For targeting the Tizen-based television I decided to use TypeScript to accomplish this; in addition to getting access to some more recent JavaScript features, we also get type checking.

A bit of work was required to get this working, though. On my first attempt I tried including the TypeScript files in the same folder as the project. This doesn’t work; when the project is compiled, the compiler tries to take these files and package them in the solution, which isn’t something we want to happen. The TypeScript files must be in a folder outside of the project folder to prevent this. I moved the files and made a TypeScript configuration file that specified the destination to which I wanted the resulting JavaScript files written.

{
  "compilerOptions": {
    /* Basic Options */
    "target": "es2015",
    "module": "commonjs",
    "sourceMap": false,
    "outDir": "../tizenWorkspace/projectName/js",
    "strict": true,
    "noImplicitAny": true
  }
}

This almost works. The next problem encountered is that when there is a reference to anything on the tizen object, the compiler complains about it not having been declared. The tizen object, not being a web standard object, is not something the compiler recognizes. There are two ways to handle this problem. A workaround is to declare the tizen object as being of type any. With this declaration the compiler will ignore whatever we do with the object and not complain.

I made a TypeScript definition file named tizen.d.ts in which to place my definitions. TypeScript already has an understanding of the interface provided by the Window object. To augment this I declare another interface that will be merged with the understanding that TypeScript has and added a definition for the tizen member there.

declare interface Window { tizen: any }

That works, but it also eliminates some of the type-safety features that TypeScript has to offer. Instead of working around the problem, I wanted to address it. I wanted to provide the type definitions for the tizen object.

There’s a project called DefinitelyTyped in which contributors make TypeScript definitions that can be downloaded and shared with other developers targeting the same environment. At first glance there appear to be existing entries for Tizen within the collection. But upon further inspection, it turns out that the definitions there (at the time of writing this post) are for a cross-development tool that also supports Tizen. That’s not what I needed. Instead of relying on community-provided definitions, I had to make my own. When I’m done, though, I may have a definition file that could be shared through DefinitelyTyped. Since that repository is constantly being updated, I would encourage seeing what it has to offer before using the code that I provide here.

declare interface Window { tizen: ApplicationManager }

This is where I start my descent down the rabbit hole. To define the ApplicationManager interface implemented by the tizen object, a number of other interfaces must be defined. Those interfaces have dependencies on yet other interfaces.

The interfaces for the various objects are documented on a Tizen.org page. Browsing through it, there are some types mentioned that are ultimately strings of one kind or another. Within TypeScript we can make a declaration, similar to a typedef, that equates a custom type name to another type.

type ApplicationId = string;
type ApplicationContextId = string;
type PackageId = string;

There are also frequently used callback types for the success and error results of asynchronous calls. The links to the documentation for these functions’ call signatures are broken, taking me to a 404 page, so I was more generic in defining these in my type definitions until I can get the specifics of the actual accepted call signatures.

type SuccessCallback = (...args: any[]) => void;
type ErrorCallback = (...args: any[]) => void;

The rest of the definitions are interfaces and follow the same patterns. I’m showing a few of the interfaces closer to the root of the definitions.

declare interface Window { tizen: tizenInterface }
declare var tizen: tizenInterface;

interface tizenInterface {
    application: ApplicationManager;
}

interface ApplicationManager {
    getCurrentApplication(): Application;

    kill(contextId: string,
         successCallback: SuccessCallback,
         errorCallback: ErrorCallback): void;

    launch(id: ApplicationId,
           successCallback: SuccessCallback,
           errorCallback: ErrorCallback): void;

    launchAppControl(appControl: ApplicationControl,
                     id?: ApplicationId,
                     successCallback?: SuccessCallback,
                     errorCallback?: ErrorCallback,
                     replyCallback?: ApplicationControlDataArrayReplyCallback): void;

    findAppControl(appControl: ApplicationControl,
                   successCallback: FindAppControlSuccessCallback,
                   errorCallback: ErrorCallback): void;

    getAppsContext(successCallback: ApplicationContextArraySuccessCallback,
                   errorCallback: ErrorCallback): void;
    getAppContext(contextId: string): ApplicationContext;
    getAppsInfo(successCallback: ApplicationInformationArraySuccessCallback,
                errorCallback?: ErrorCallback): void;
    getAppInfo(id?: ApplicationId): ApplicationInformation;
    getAppCerts(id?: ApplicationId): Array<any>;      // element interface not defined yet
    getAppSharedURI(id?: ApplicationId): string;
    getAppMetaData(id?: ApplicationId): Array<any>;   // element interface not defined yet
    addAppInfoEventListener(eventCallback: ApplicationInformationEventCallback): number;
    removeAppInfoEventListener(watchId: number): void;
}

There are a lot more objects that could be defined for Tizen. If you’ve come across this article, check out the DefinitelyTyped archives first. If you don’t find Tizen definitions there, you can download the version of the definition file that I have from here.
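
With the definitions in place, code against the tizen object now type-checks. A quick sanity check might look like this (assuming the remaining interfaces, such as Application, have also been declared):

// The compiler now knows window.tizen.application exists and what it offers.
const manager = window.tizen.application;
const current = manager.getCurrentApplication();

// Typos are now caught at compile time rather than at runtime on the TV:
// manager.getCurentApplication();   // error: property does not exist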

What is .Net

.NetFramework

I have some .Net related content that I plan to post and thought that I would revisit this question.

It’s a question I find interesting in that the answer has changed slightly over the years. In the earliest years it was a branding for technologies that were not necessarily related to each other; Windows .NET Server and Windows .NET Messenger are two products that had the branding at one point. But let’s not walk down memory lane, and jump straight into the answer.

.NET is still a branding, but the technologies with the branding are related to each other. Microsoft uses the branding on their Common Language Runtime (CLR) products. That answer only kicks the can down the road: what is the CLR?

The CLR is a virtual machine component. Executables targeting the CLR don’t necessarily contain any code that is native to the processor on which they run (though they may contain native code; let’s ignore that for a moment). CLR binaries can be distributed with no processor-dependent executable code within them. At runtime, the code is converted to machine code as needed. Because of this, the same program can run on machines that have different processor architectures. The computer on which a program runs needs the runtime specific to its architecture and operating system.

This system might sound familiar, as modern Java does something similar. There was a time when Microsoft was invested in Java virtual machines and made the first Java runtime that compiled Java binaries to machine language. The entity that owned Java at the time (Sun) wasn’t happy about this, and took Microsoft to court for deviating from the standard of how Java virtual machines worked and for using the Internet as a method of distribution, among other reasons. This disagreement might sound petty, and in part it was, but there were good reasons for their position that I’ll present in another post. This interaction added weight to the argument that Microsoft should have their own virtual machine. They also made their own programming languages (C# and Visual Basic .NET) and a few CLRs for x86, x64, and their mobile devices.

The CLR, which ships as part of the .NET Framework, has seen several updates over the years. Microsoft published the runtime and C# as ECMA standards and eventually made its implementation open source. The standards enabled another CLR implementation named Mono, which allowed .NET Framework applications to run on Linux and Mac.

If you look up .Net now you’ll find a few .NET systems listed.

  • .Net Standard
  • .Net Core (2016)
  • .Net Framework (2002)
  • ASP.Net / ASP.Net Core

What are these?

.Net Standard is a specification of the set of APIs that are expected to be in all implementations of the .Net Framework. Think of it as analogous to an interface; .NET Standard itself isn’t an implementation. If you make an application that sticks to these APIs, it will have a wide range of compatible targets.
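
In project terms, targeting the standard is a one-line choice in a class library’s project file; a minimal sketch:

<!-- A class library targeting .NET Standard; consumable from both .NET Framework and .NET Core projects. -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>
</Project>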

For the .Net Framework, only one version of the framework can be installed at a time. Microsoft generally kept backwards compatibility, but it wasn’t perfect. Since a system could only have one version of the Framework installed, in corporations updating the Framework had to be a company-level decision.

.Net Core was made to contain the most common features of the .Net Framework, but it adds a few new features. It was made with multiple operating systems and multiple versions in mind.  A system can have multiple versions of .Net Core installed, and they can run side-by-side.  From here on, Microsoft will be putting its efforts into improving .Net Core. Microsoft will continue to support the .Net Framework, but don’t expect to see new features in it; the new features will be coming to .Net Core. A lot of legacy functionality from the .Net Framework did not get ported over to .Net Core in the interest of performance and compatibility.
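
If you have the .Net Core SDK installed, you can see the side-by-side support directly from the command line; the versions and paths below are illustrative, not what you will necessarily see:

dotnet --list-runtimes
Microsoft.NETCore.App 2.2.8 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 3.1.0 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]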

ASP (Active Server Pages) is the name for Microsoft’s web development system. Some of the earlier versions used a language that was similar to Visual Basic (yuck). The first version of ASP that supported .NET was called ASP.NET. ASP.NET used the .Net runtime, and the more recent version supports the .NET Core runtime.  Traditionally ASP pages were hosted within IIS (Internet Information Services), a Windows component for hosting web pages. With the modern versions, while this is still an option, ASP.NET pages can be hosted outside of IIS too.

If you are starting a new desktop .Net project and don’t know which version to use, the safe choice will generally be .Net Core. In my opinion its best feature is the ability to run on multiple systems (Mac, Linux, Windows, and various IoT devices including the Raspberry Pi).


Trying to learn C# and .Net Core? This is a book I would recommend.


Credit Cards and Magnetic Strips in the USA

Yes, in the USA we still use magnetic strips on credit cards.

I’ve had interactions with people online more than once where I found myself explaining this. One of the more recent incidents was about a phone that used magnets to keep something in place. I had commented that with such a phone I would want to make sure it didn’t get near my bank cards because of the magnets. I received confused responses. I’ll address some of the frequent ones.

  • Why does it matter if it gets near a magnet?
    – If a credit card is in contact with a magnet that is strong enough or for long enough, the data on the strip can be damaged, making the card unusable in magnetic strip readers.
  • Why not use contactless payment?
    – Banks in the USA for the most part don’t issue contactless cards. I’ve checked for them and found they do exist, but only on certain “products” from a bank. For example, if you have a card from Main Street Bank Visa (a bank name I’ve just made up) it might not be contactless, but the Main Street Bank Visa Barbie Edition (also a made-up card) may have the contactless feature.
  • Why not use the chip and PIN?
    – The chip isn’t accepted at some POS terminals. For credit cards in the USA we almost never use PINs. For bank cards associated with a checking account, though, a PIN must be entered if the card is processed as a bank card instead of a credit card. High-volume stores, including fast food restaurants, also prefer not to use the PIN. The amount of money they can make is heavily dependent on how quickly they can complete the purchase process, and at busy times they risk losing more money from missed sales than from fraud.

Typically, people in the USA who do have experience with contactless payment have it through Apple Pay or Samsung Pay. While Android Pay had been present for years before, it was hamstrung by three of the four nationwide phone carriers here. They decided to block Android Pay and created their own payment system. Their system was called ISIS and was based on the American Express Serve cards. But things happened in the news, and the ISIS name was no longer marketing friendly.  They changed the name to Softcard, but the effort ultimately failed.

I myself was able to use Android Pay for some time since I had a Google phone that wasn’t distributed through the carriers. It worked fine until Apple Pay was released. When Apple Pay was released, many major retailers decided to deactivate their contactless receivers. They wanted in on mobile app payments, and they blocked such payments altogether. Now if I entered a store, even if I could visually identify that a contactless payment terminal was present, I didn’t know whether it worked. At one point it failed more often than not, and it was easier to just not bother with it.

Why not use an app made by the retailer instead of the phone’s payment app? There are two reasons. The first is that the retailers didn’t have such apps; they were starting development of their systems and blocked other contactless payments while they prepared their own. When such apps became available, my personal motivation was that data breaches happen regularly and it is safer to minimize the number of accounts in which personal information appears. Even when an account is closed, in the USA retailers may retain records of the information; it is still vulnerable to a breach after an account with the retailer has been closed.

With that said, I’m hoping to see a shift in how credit cards are handled in the USA. There’s a high rate of credit card fraud here, and several times within a year my associates and I find that we must get a card replaced because of data breaches.