Introduction to Direct2D: DirectWrite

In my previous installment of this series I was rebuilding a video game interface.  The interface resembled my target, but did not have any text.  How can I render text with Direct2D?  The answer is: DirectWrite.

What is DirectWrite?

DirectWrite is a GPU accelerated text rendering API that works in concert with Direct2D.  It originally shipped with Windows 7 and (as of the publishing date of this article) still receives updates from Microsoft through the Windows 10 updates.  It is a great solution for rendering text within a DXGI based program.  While surfaces are generally dependent on the device, DirectWrite resources are device independent; there is no need to recreate them if the surface is lost.
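
The snippets in this post assume a DirectWrite factory member named _pWriteFactory has already been created.  Here is a minimal sketch of that creation (using the TOF helper described later in this series and ComPtr from WRL):

#include <dwrite.h>
#include <wrl/client.h>
#pragma comment(lib, "dwrite.lib")

Microsoft::WRL::ComPtr<IDWriteFactory> _pWriteFactory;

//Create the shared DirectWrite factory from which the other DirectWrite objects are made.
TOF(DWriteCreateFactory(
	DWRITE_FACTORY_TYPE_SHARED,
	__uuidof(IDWriteFactory),
	reinterpret_cast<IUnknown**>(_pWriteFactory.GetAddressOf())
));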

Text Format

There is no default text style.  When text is rendered it must have a definite font, style, size, and weight.  All of these text attributes are packaged in IDWriteTextFormat.  If you have worked with CSS, think of IDWriteTextFormat as being a style for text.  Through this interface you select the font face, weight, size, and text alignment.

TOF(_pWriteFactory->CreateTextFormat(
	TEXT("Segoe UI"),	//font family
	NULL,				//font collection (NULL uses the system font collection)
	DWRITE_FONT_WEIGHT_EXTRA_BOLD,	//weight
	DWRITE_FONT_STYLE_NORMAL,		//style
	DWRITE_FONT_STRETCH_NORMAL,		//stretch
	40.0f,				//size in device independent pixels
	TEXT("en-us"),		//locale
	&_pDefaultFont
));

Drawing Text

I will discuss two ways to render a text string: DrawText and DrawTextLayout.  The easier method of rendering a text string is to use ID2D1RenderTarget::DrawText.  This method accepts a text format object and a bounding rectangle.  It renders the text string at the location specified by the bounding rectangle.  It also accepts optional arguments that affect text layout and metrics.  This is the easier of the two methods for rendering text, but it does not support mixed styles within the same text string.  For that you would need a text block rendered in IDWriteTextLayout with ID2D1RenderTarget::DrawTextLayout (discussed in the next section).  Text rendered with this method is positioned within the bounding rectangle according to the alignment settings on the text format.

std::wstring test = TEXT("TEST");
D2D1_RECT_F textLocation{20,200,800,800};
_pRenderTarget->DrawTextW(
	test.c_str(), test.size(), //The text string and length
	_pDefaultFont.Get(),  //The font and style description
	&textLocation,		//The location in which to render the text
	_pBackgroundBrush.Get(),  //The brush to use on the text
	D2D1_DRAW_TEXT_OPTIONS_NONE, 
	DWRITE_MEASURING_MODE_NATURAL
);

Text Layout

A text layout serves a similar function to a text block in that it is used for showing text.  The primary difference is that a text block has a one-to-one relationship with what is rendered on the screen, while a text layout (through the IDWriteTextLayout interface) can be rendered several times.  If there were labels that were used repeatedly with the same text, they could be implemented by creating a single IDWriteTextLayout object and rendering it several times.

//In InitDeviceIndependentResources()

std::wstring  stringResult = L"RESULT";
TOF(_pWriteFactory->CreateTextLayout(
	stringResult.c_str(),
	stringResult.size(),
	_pDefaultFont.Get(),
	400, 90, 
	&_pTextLayout
));

//In OnRender()
_pRenderTarget->DrawTextLayout({ 0, 0 }, _pTextLayout.Get(), _pBackgroundBrush.Get());
[Image: the resulting text from the above code]

Unlike text rendered with ID2D1RenderTarget::DrawText, text rendered with IDWriteTextLayout can have styling applied to specific ranges of letters within the text.  If you need to mix text styles, use IDWriteTextLayout instead of ID2D1RenderTarget::DrawText.  Create a text layout initially using a text format that covers the majority of the text in your layout.  Where deviations from your selected default should apply, create a DWRITE_TEXT_RANGE instance.  DWRITE_TEXT_RANGE contains the index of the starting character for the new text styling and the number of characters to which it should be applied.  Functions such as SetFontWeight, SetFontSize, SetFontStyle, and SetUnderline accept a DWRITE_TEXT_RANGE for adjusting the styling of a range.

When creating the IDWriteTextLayout a width and a height for the text area are needed.  Content rendered within this rectangle will automatically be wrapped.

std::wstring  stringResult = L"RESULT";
TOF(_pWriteFactory->CreateTextLayout(
	stringResult.c_str(),
	stringResult.size(),
	_pDefaultFont.Get(),
	400, 90, 
	&_pTextLayout
));
DWRITE_TEXT_RANGE range{ 2,3 };
_pTextLayout->SetFontWeight(DWRITE_FONT_WEIGHT_EXTRA_LIGHT, range);

[Image: the text layout with a lighter font weight applied to a range of characters]

Applying DirectWrite to the Project Interface

Where I left off, I had used Direct2D to render color to areas in which the interface will show information.

[Image: the interface so far, colored regions without text]

The next step is to apply text to the interface.  This interface will ultimately be used for performing batch processing of some media files.  The details and implementation of that processing are not shown here and are not important since this is only showing an interface.  The text populating the interface in this example is for demonstrative purposes only.

As shown when creating the interface, I am making a list of shape types to render.  In OnRender() there is a switch statement that makes the necessary calls depending on the next shape type in the list.  Between the last post and this one I changed my shape representation to a shape base class with subclasses instead of a single struct with the additional data packaged in unioned elements.

I am using IDWriteTextLayout to render text instead of ID2D1RenderTarget::DrawText.  Since DrawText positions the text within the bounding rectangle according to the text format's alignment, I could not get the layout to be what I wanted.  Using IDWriteTextLayout in combination with layout options I was able to achieve the layout that I was looking for.

With the development of a hierarchy for the types of items that I am rendering, I am almost tempted at this point to just start defining a control hierarchy.  While that would have utility it may also distract from the concepts being shown here.  A control hierarchy may be written some other day.  For now the hierarchy I am using for shapes is shown in the following code.

interface  IDispose {
	virtual void Dispose() = 0;
};

struct Shape: public IDispose {
	Shape() {}

	Shape(ShapeType shapeType, PaletteIndex paletteIndex)
	{
		this->shapeType = shapeType;
		this->paletteIndex = paletteIndex;
	}
	virtual void Dispose()
	{

	}
	std::wstring tag;
	ShapeType shapeType;
	PaletteIndex paletteIndex;
};

struct TextShape :public Shape {
	TextShape() {}
	TextShape(std::wstring text, D2D1_RECT_F location, TextStyle textStyle = TextStyle_Label, PaletteIndex palette = PaletteIndex_Primary, 
		DWRITE_TEXT_ALIGNMENT textAlignment = DWRITE_TEXT_ALIGNMENT_LEADING, 
		DWRITE_PARAGRAPH_ALIGNMENT paragraphAlignment = DWRITE_PARAGRAPH_ALIGNMENT_CENTER)
		:Shape(ShapeType_Text, palette)
	{
		this->text = text;
		this->location = location;
		this->textStyle = textStyle;
		this->paragraphAlignment = paragraphAlignment;
		this->textAlignment = textAlignment;
	}

	void Dispose() override 
	{
		this->textLayout = nullptr;
	}
	std::wstring text;
	D2D1_RECT_F location;
	TextStyle textStyle;
	ComPtr<IDWriteTextLayout> textLayout;
	DWRITE_PARAGRAPH_ALIGNMENT paragraphAlignment;
	DWRITE_TEXT_ALIGNMENT textAlignment;

};

struct RectangleShape : public Shape {
	RectangleShape() :Shape(ShapeType_Rectangle, PaletteIndex_Primary) {

	}
	RectangleShape(D2D1_RECT_F rect, PaletteIndex p) :
		Shape(ShapeType_Rectangle, p)
	{
		this->rect = rect;
	}
	D2D1_RECT_F rect;
};

struct EllipseShape : public Shape {
	EllipseShape():Shape(ShapeType_Ellipse, PaletteIndex_Primary)
	{}
	EllipseShape(D2D1_ELLIPSE ellipse, PaletteIndex p = PaletteIndex_Primary)
		:Shape(ShapeType_Ellipse, p)
	{
		this->ellipse = ellipse;
	}

	D2D1_ELLIPSE ellipse;
	
};

The OnRender() method has been modified to reflect the new struct hierarchy and now has additional code for rendering text.

switch ((*current)->shapeType)
{
	case ShapeType::ShapeType_Rectangle: 
	{
		std::shared_ptr<RectangleShape> r = std::static_pointer_cast<RectangleShape>(*current);
		_pRenderTarget->FillRectangle(r->rect, brush.Get());
		break;
	}
	case ShapeType_Ellipse: 
	{
		std::shared_ptr<EllipseShape> e = std::static_pointer_cast<EllipseShape>(*current);
		_pRenderTarget->FillEllipse(e->ellipse, brush.Get()); 
		break;
	}
	case ShapeType_Text:
	{
		std::shared_ptr<TextShape> t = std::static_pointer_cast<TextShape>(*current);
		if (t->textLayout)
		{
			ComPtr<IDWriteTextFormat> format = _pDefaultFont;
			//_pRenderTarget->DrawTextW(t->text.c_str(), t->text.size(), format.Get(), t->location, brush.Get());
			D2D1_POINT_2F p = { t->location.left, t->location.top };
			_pRenderTarget->DrawTextLayout(p, t->textLayout.Get(), brush.Get());
		}
		break;
	}
	default:
		break;
}

And now the UI looks more complete.

[Image: the interface with text rendered in place]

There are adjustments to be made with the margins on the text and positioning.  Those are going to be ignored until other items are in place.  One could spend hours making adjustments to positioning.  I am holding off on small adjustments until the much larger items are complete.

The complete source code can be found in the following commit on GitHub.

https://github.com/j2inet/CppAppBase/tree/5189359d50e7cdaee79194436563b45ccd30a88e

There are a few places in the UI where I would like to have vertically rendered text.  A 10 to 15 degree tilt has yet to be applied to the entire UI.  I would also like to be able to display preview images from the items being processed.  My next posts in this series will address: Displaying Images in Direct2D and Applying Transformations in Direct2D.

Introduction to Direct2D: Part 1

DirectX is a family of APIs that provide functionality related to multimedia. Some of the APIs are focused on sound, some on input, and some on graphics. Today I want to present one of the graphical APIs based on the DirectX Graphics Infrastructure (DXGI), known as Direct2D. The Direct2D API is for rendering graphics in 2D (as suggested by the name). The API takes advantage of the features in the graphics card to accelerate the rendering.

When working with Direct2D you don’t create the objects directly. Instead you use a factory method that creates the objects and returns interfaces to them, or you use those returned interfaces to request additional objects. The interfaces returned by the Direct2D APIs are COM pointers. COM, or Component Object Model, is a standard for making components that interact with each other. The standard has existed since 1993. Despite its age it is important to Windows and used more than one might think within .NET.  There are books that cover how COM works. Some of them are old, but don’t let that make you think that their information isn’t relevant. A full discussion of COM is outside of the scope of this post.

To get an initialized window with which to work I’ll be using the sample Win32/C++ code that I presented in a previous post. See that post for more information about how the base window works. In my derived class I’ll begin initializing the Direct2D specific objects.

Adding Update and Render Methods

The AppWindow sample implements a message pump that uses PeekMessage instead of GetMessage. This allows the application to do other things instead of halting when there are no messages to be processed. When there are no messages a method named Idle() is called. This method will be repurposed for adding an update/render loop.

There will be four methods added: Update(), OnUpdate(), Render(), and OnRender(). Both OnUpdate() and OnRender() are virtual; these methods contain code that could be completely replaced or overridden if this code were repurposed for another application.  The other two methods, Update() and Render(), contain code that needs to be called on every update cycle and rendering cycle and that I would not want affected when derived applications override the rendering or the updating. The Update() method also queries the performance timer to calculate the amount of time that has passed since the last update. This information could be used to know how far to move an object in a scene on the next frame update or to calculate the FPS.

[Code screenshot: the Idle(), Update(), and Render() methods in D2D1AppWindow]
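
For readers who cannot see the image, here is a minimal sketch of the idea (the _lastTime member and the exact method bodies are assumptions based on the description above):

//Called by the message pump whenever no messages are waiting.
void D2D1AppWindow::Idle()
{
	Update();
	Render();
}

void D2D1AppWindow::Update()
{
	//Query the performance timer to find how much time has passed since the last update.
	LARGE_INTEGER frequency, now;
	QueryPerformanceFrequency(&frequency);
	QueryPerformanceCounter(&now);
	double elapsedSeconds = double(now.QuadPart - _lastTime.QuadPart) / double(frequency.QuadPart);
	_lastTime = now;
	OnUpdate();	//overridable per-frame logic; elapsedSeconds could drive animation or an FPS counter
}

void D2D1AppWindow::Render()
{
	OnRender();	//overridable drawing code
}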

Initializing D2D

You’ll see two variations of the D2D interfaces. One version is prefixed with D2D1 (note the numeral 1 at the end). The D2D1 interface was released with Windows 8 and is the one that I’ll be using here.  There are two interfaces that we need to initialize: ID2D1Factory, which will be the object from which we directly or indirectly make the other D2D1 objects, and ID2D1HwndRenderTarget, which is an interface that we use for painting our scene onto the window object created by the application base.  As we create objects, something to keep in mind is the two classifications into which DXGI objects fall (this isn’t limited to Direct2D; it is applicable to other DirectX graphical APIs). An object can be a device dependent resource or a device independent resource. What’s the difference?

Device Dependent and Device Independent Resources

Device in this context refers to the video rendering device. This will usually be the video card, but in some scenarios it could be a pure software device or even refer to a resource that is hosted on another computer. To keep things simple let’s just view Device as being synonymous with video card or video adapter. Some resources allocate resources on the device (video adapter). But these resources could be lost at any moment if the state of the device changes (changing the resolution of the device will do this) or if the device gets disconnected (yes, a video adapter can be disconnected while the program is still running). If this occurs then none of the device dependent resources remains valid.

Device independent resources do not allocate resources on the video card. These resources use RAM allocated to the CPU and are not affected by changes in the display configuration.

When a device is lost all of the device dependent resources must be released and recreated. In some programs a valid and simple way to do this is to simply terminate the program and start a new instance of the program. While this is an easy solution, the program will lose any context of what the user was doing unless it also saves and restores that state. For scenarios in which I’ve used this solution the access to the computer hardware was restricted, making the loss of a device an extremely low frequency event (something that might happen once a day at 3 AM while automated maintenance is doing something to the computer).

Examples of objects that are device independent resources include the ID2D1Factory object that is used for creating the other objects, Geometry objects used for describing shapes (objects that implement the ID2D1Geometry interface), and stroke style objects (ID2D1StrokeStyle).

Examples of device dependent resources include ID2D1Device (which refers to the device that can itself be lost), brush resources (which implement ID2D1Brush), and ID2D1Layer (for creating rendering layers).

Object Creation

When initializing my D2D objects I have two methods in which the initialization occurs. The method InitDeviceResources() initializes the device dependent resources. InitDeviceIndependentResources() creates the other resources. During a program’s lifecycle InitDeviceIndependentResources() will only be called once. InitDeviceResources() will be called at least once but could be called many more times.  To ensure that these resources are released there’s an additional method, DiscardDeviceResources(), that releases all of the device dependent resources. Since the ComPtr type is being used, setting a pointer to nullptr will result in the resource being released.

[Code screenshot: the Init() method in D2D1AppWindow]
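
In sketch form the initialization order looks something like the following (the exact ordering here is an assumption; the actual code is in the screenshot and the repository):

void D2D1AppWindow::Init()
{
	InitDeviceIndependentResources();	//created once for the program's lifetime
	AppWindow::Init();					//creates the window itself
	InitDeviceResources();				//device dependent; may be recreated later
}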

The call to AppWindow::Init() results in the creation of the window. The initial implementations for InitDeviceIndependentResources() and InitDeviceResources() will create the ID2D1Factory object and create a RenderTarget for rendering to the screen.

[Code screenshot: InitDeviceResources() in D2D1AppWindow]
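
A minimal sketch of these methods, assuming the member names used elsewhere in this series (_pD2D1Factory, _pRenderTarget, _hWnd, and _size):

void D2D1AppWindow::InitDeviceIndependentResources()
{
	//The factory is device independent and is only created once.
	TOF(D2D1CreateFactory(D2D1_FACTORY_TYPE_SINGLE_THREADED, IID_PPV_ARGS(&_pD2D1Factory)));
}

void D2D1AppWindow::InitDeviceResources()
{
	//The render target is bound to the window and to the device; it must be
	//recreated if the device is lost.
	TOF(_pD2D1Factory->CreateHwndRenderTarget(
		D2D1::RenderTargetProperties(),
		D2D1::HwndRenderTargetProperties(_hWnd, _size),
		&_pRenderTarget
	));
}

void D2D1AppWindow::DiscardDeviceResources()
{
	_pRenderTarget = nullptr;	//assigning nullptr to a ComPtr releases the object
}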

InitDeviceIndependentResources() will create the ID2D1Factory. The other resources will be created in InitDeviceResources() and cleared in DiscardDeviceResources().

Losing the Device

During the render phase of the application, the various calls made result in commands being queued for execution on the video card to render our scene.  The commands are not executed until we end our drawing (which actually ends the adding of commands to the queue). If the connection to the device has been lost, the call to EndDraw() returns a value that can be examined to detect that the device has disconnected.  If the returned value is D2DERR_RECREATE_TARGET the resources must be reinitialized.

[Code screenshot: handling a lost device after EndDraw()]

Rendering

In DXGI applications drawing is performed using render target objects. There are several types of render targets. We will be using an ID2D1HwndRenderTarget in this sample application: a render target that renders to the application window. Render targets can also render to off-screen surfaces, textures, or remote machines.

In the previous code sample a render target was already being used to clear the application window and fill it with the color blue. The general usage pattern will look like the following.

ID2D1RenderTarget* myRenderTarget;
myRenderTarget->BeginDraw();
//call draw functions here
myRenderTarget->EndDraw();

Let’s draw a square. There are three things that we need: a description of the square’s geometry, a brush to draw the square with, and a color to assign to the brush. If you’ve used Win32’s GDI (Graphics Device Interface) you’ve seen the term “brush” used before in a similar sense. A brush is a system resource that contains information on what to use when drawing something. A brush could render a solid color or it could use portions of a bitmap to fill a shape.

The struct D2D1_RECT_F is used to describe the square here (also available: D2D1_RECT_U and D2D1_RECT_L). This structure contains the elements left, top, right, and bottom. Any D2D1_RECT_F instances that are created reside within the CPU’s memory.  The color to be used with the brush is also assigned through a struct that only uses the CPU’s memory. The type D2D1_COLOR_F is used here, with the values red, green, blue, and alpha of type float.

For the brush there are GPU resources that will be consumed; it is a device dependent resource. We request that a brush be created and receive a pointer to its interface by calling ID2D1RenderTarget::CreateSolidColorBrush. The method arguments are the color that the brush should be and a pointer to the object that will receive the interface address.

[Code screenshot: creating the solid color brushes]
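
A sketch of the brush creation; the color and brush member names are assumptions consistent with the rest of this series:

//Brushes consume device resources, so they are created in InitDeviceResources().
_primaryColor = D2D1::ColorF(0.7843f, 0.0f, 0.0f, 1.0f);
TOF(_pRenderTarget->CreateSolidColorBrush(_primaryColor, &_pPrimaryBrush));
TOF(_pRenderTarget->CreateSolidColorBrush(_secondaryColor, &_pSecondaryBrush));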

To render the square the OnRender() method requires one or two additional calls. When rendering a geometry there are two types of methods available. One type will outline the geometry and the other is to fill the shape (which results in a solid shape). To render a geometry that has both a solid fill and an outline first use the fill method and then the draw method as I do below.

[Code screenshot: filling and then outlining the square]
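
A sketch of the render call, following the fill-then-draw pattern described above:

_pRenderTarget->BeginDraw();
_pRenderTarget->Clear(_backgroundColor);
_pRenderTarget->FillRectangle(&_mySquare, _pPrimaryBrush.Get());	//solid interior first
_pRenderTarget->DrawRectangle(&_mySquare, _pSecondaryBrush.Get());	//then the outline on top
_pRenderTarget->EndDraw();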

There are a number of other geometries that ID2D1RenderTarget supports: ellipses, rectangles, rounded rectangles, text, and other constructed geometries.

Resizing the Window

The square renders, but if you resize the window you’ll see a problem. As the window is shrunk on one axis or the other, the square shrinks on that axis; the square becomes a rectangle with uneven sides. Fixing this is easy enough. The render target must be updated to reflect the new window size. Since this sample application is derived from the CppAppBase that I presented in a previous post, there exists a resize method that can be overridden to update the render target. ID2D1HwndRenderTarget::Resize accepts a D2D1_SIZE_U struct and updates the render target size accordingly.

void D2D1AppWindow::OnResize(UINT width, UINT height)
{
	this->_size = D2D1_SIZE_U{ width, height };
	if (_pRenderTarget)
	{
		_pRenderTarget->Resize(_size);
	}
}

There is still an annoying behaviour. While the window is being resized the screen is blank. Part of the reason for this is that the rendering only occurs when there are no messages to process, and resizing generates messages. One of the messages generated while resizing is a WM_PAINT message; this is telling the application to redraw itself. If we call OnRender() in response to a WM_PAINT message this will handle the screen updates while resizing. The AppWindow base class has an OnPaint() method that is called when a WM_PAINT message comes through. Overriding this method and adding a call to OnRender() results in a window that doesn’t go blank when resizing.

void D2D1AppWindow::OnPaint()
{
	this->OnRender();
}

Rendering a Shape List

One could write code to individually render each one of the primitives needed to make up a more complex scene. For applications with complex scenes this is especially undesirable. Instead, a better solution is to have the primitives that must be called encoded within a file. If a change needs to be made to a scene, the change would be made in the file instead of in code.

For the next sample I’m going to move halfway to such a solution. I’m making a list of primitives to be rendered and am populating the list from code. With a bit more work the data could be externalized. I’m using the list to start rebuilding an interface. I was playing a video game on my Nintendo Switch. The game had a screen displayed between levels that shows how well someone did. I thought the interface was both simple and æsthetically pleasing.

[Image: the between-level results screen from the video game]

Creating something similar to this is my goal. It is composed of several rectangles, a few concentric circles, and text. Were the interface made up of all the same type of geometry element, it could be defined with a simple list of attributes about the rectangles. Since there is more than one type of geometry, the list elements must identify a geometry type. I’ve made a struct that has a field identifying the shape type (Rectangle or Ellipse for now), a field identifying which of the colors in my three color palette the item should use, and the information for the shape itself. The rectangle data and ellipse data are in a union to minimize the amount of memory used for each item.

enum   PaletteIndex {
	Primary = 0,
	Secondary = 1,
	Background = 2
};

enum ShapeType {
	ShapeType_Rectangle = 0, 
	ShapeType_Ellipse = 1
};

struct ColoredShape
{
	static ColoredShape MakeEllipse(D2D1_ELLIPSE e, PaletteIndex p)
	{
		ColoredShape c{ 0 };
		c.shapeType = ShapeType::ShapeType_Ellipse;
		c.ellipse = e;
		c.paletteIndex = p;
		return c;
	}

	static ColoredShape MakeRectangle(D2D1_RECT_F r, PaletteIndex p)
	{
		ColoredShape c{ 0 };
		c.shapeType = ShapeType::ShapeType_Rectangle;
		c.rect = r;
		c.paletteIndex = p;
		return c;
	}

	union  {
		D2D1_ELLIPSE ellipse;
		D2D1_RECT_F rect;
	};
	ShapeType shapeType;
	PaletteIndex paletteIndex;
};

The factory methods make it easier to initialize instances of the struct. I’ve got a vector that holds these structs. It is populated with the data for each rectangle and ellipse to be rendered. The items will be rendered in the same order in which they appear in the list. If two items overlap, the one rendered later will be on top.

void D2D1AppWindow::InitDeviceIndependentResources()
{
	TOF(D2D1CreateFactory(D2D1_FACTORY_TYPE_SINGLE_THREADED, IID_PPV_ARGS(&_pD2D1Factory)));
	_primaryColor = D2D1::ColorF(0.7843F, 0.0f, 0.0f, 1.0f);
	_secondaryColor = D2D1::ColorF(0.09411, 0.09411, 0.08593, 1.0);
	_backgroundColor = D2D1::ColorF(0.91014, 0.847655, 0.75f, 1.0);
	_mySquare = { 20, 20, 30, 30 };

	
	_shapeList.push_back(ColoredShape::MakeRectangle({0,0,456,104 }, Primary));
	_shapeList.push_back(ColoredShape::MakeRectangle({ 0,128,456,550 }, Primary ));
	_shapeList.push_back(ColoredShape::MakeRectangle({ 488,0,508,110 }, Secondary ));
	_shapeList.push_back(ColoredShape::MakeRectangle({ 488,40,900,115 }, Secondary ));

	_shapeList.push_back(ColoredShape::MakeRectangle({ 530,130,850,180 }, Secondary));
	_shapeList.push_back(ColoredShape::MakeRectangle({ 690,130,850,180 }, Primary));

	_shapeList.push_back(ColoredShape::MakeRectangle({ 530,190,850,240 }, Secondary));
	_shapeList.push_back(ColoredShape::MakeRectangle({ 690,190,850,240 }, Primary));

	_shapeList.push_back(ColoredShape::MakeRectangle({ 530,250,850,300 }, Secondary));
	_shapeList.push_back(ColoredShape::MakeRectangle({ 690,250,850,300 }, Primary));

	_shapeList.push_back(ColoredShape::MakeRectangle({ 0,350,600,450 }, Secondary ));
	_shapeList.push_back(ColoredShape::MakeRectangle({ 0,350,600,450 }, Secondary));

	_shapeList.push_back(ColoredShape::MakeEllipse({ 570,400,60, 60 }, Secondary));
	_shapeList.push_back(ColoredShape::MakeEllipse({ 570,400,57, 57 }, Background));
	_shapeList.push_back(ColoredShape::MakeEllipse({ 570,400,55, 55 }, Secondary));
	_shapeList.push_back(ColoredShape::MakeEllipse({ 570,400,52, 52 }, Background));

	_shapeList.push_back(ColoredShape::MakeRectangle({ 476,470,870, 550 }, Secondary));
}

In the OnRender() method this list is iterated and each element is drawn.

void D2D1AppWindow::OnRender()
{
	HRESULT hr;
	if (!_pRenderTarget)
		return;
	_pRenderTarget->BeginDraw();
	_pRenderTarget->Clear(_backgroundColor);
	_pRenderTarget->FillRectangle(&_mySquare, _pPrimaryBrush.Get());
	_pRenderTarget->DrawRectangle(&_mySquare, _pSecondaryBrush.Get());

	for (auto current = _shapeList.begin(); current != _shapeList.end(); ++current)
	{
		ComPtr<ID2D1SolidColorBrush> brush;
		switch (current->paletteIndex)
		{
		case Primary: brush = _pPrimaryBrush; break;
		case Secondary: brush = _pSecondaryBrush; break;
		case Background: brush = _pBackgroundBrush; break;
		default: brush = _pPrimaryBrush;
		}
		switch (current->shapeType)
		{
		case ShapeType::ShapeType_Rectangle:_pRenderTarget->FillRectangle(&current->rect, brush.Get()); break;
		case ShapeType::ShapeType_Ellipse:_pRenderTarget->FillEllipse(&current->ellipse, brush.Get()); break;
		}
		
	}
	
	hr = _pRenderTarget->EndDraw();
	if (hr == D2DERR_RECREATE_TARGET)
	{
		// The surface has been lost. We need to recreate our
		// device dependent resources. 
		this->DiscardDeviceResources();
		this->InitDeviceResources();
	}
}

Running the program gives me something that looks like it was inspired by the video game interface.

[Image: the rebuilt interface rendered by the program]

There are similarities, but there are also plenty of differences. Aside from the missing text, in the video game interface the text is rotated slightly. With the way rectangles are defined now there’s no apparent way to render a rectangle that isn’t aligned with the X or Y axis. To move forward we need to know how to apply transformations to the rendering and how to render text. I’ll continue there in the next part of this series.

The complete source code for the program can be found on GitHub.

https://github.com/j2inet/CppAppBase

Win32/C++ Application Base

Win32 is necessary for development in many current technologies.  Some of my current projects utilize Win32/C++ applications.  Many of them use a similar pattern for starting the application.  Initializing the window that hosts these applications is not particularly interesting, but it is a foundation block for some of my applications that I will share in the future.

In Win32 UI programs there will be at least one function defined that is known as being of type WNDPROC.  A WNDPROC is a callback function.  It is the primary function through which Windows communicates various events to an application.  When a user moves the mouse over an application, WNDPROC is called.  When the user presses a key, WNDPROC is called.  When an application is being notified to render its UI again, WNDPROC is called.  The call signature for WNDPROC is shown below.

LRESULT CALLBACK WindowProc(
  _In_ HWND   hwnd,
  _In_ UINT   uMsg,
  _In_ WPARAM wParam,
  _In_ LPARAM lParam
);

The first argument in this function is a handle to the window that is the intended recipient of the message.  An application could have more than one window, but an application with only one window will always receive the same value here.  The second argument is a numerical value that identifies the event or message being sent.  The constants that define possible values are generally prefixed by WM_ (meaning Windows Message).  Examples of such values include: WM_COMMAND, which is generally the result of a button being clicked; WM_PAINT, which tells the application to redraw its UI; WM_SIZE, meaning that the window has been resized; and WM_CLOSE, meaning that there was a request to close the window.

Because of the central position that the WNDPROC function plays in receiving notifications from the operating system, if all responses to the operating system were handled here the function would quickly grow big.  I have seen this done, and it can get to be a nightmare on shared projects.  One way to deal with this is to have the WNDPROC function act only as a router that directs incoming messages to other functions that handle them.  A simple way to implement this is with a switch statement where each case does a minimal amount of work before passing the message on to a function specifically made to handle it.
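
A sketch of that routing pattern (the handler names are illustrative; OnPaint() and OnResize() appear in the actual base class, OnClose() is an assumption):

LRESULT AppWindow::HandleMessage(HWND hwnd, UINT uMsg, WPARAM wParam, LPARAM lParam)
{
	switch (uMsg)
	{
	case WM_PAINT:
		OnPaint();
		return 0;
	case WM_SIZE:
		//The low and high words of lParam carry the new client width and height.
		OnResize(LOWORD(lParam), HIWORD(lParam));
		return 0;
	case WM_CLOSE:
		OnClose();
		return 0;
	default:
		return DefWindowProc(hwnd, uMsg, wParam, lParam);
	}
}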

That is the solution used in the projects I plan to share in the future.  I created an abstract base class named AppWindow that creates an application window; has a few methods for handling certain Windows messages; and has case statements for calling those methods in response to a Windows message.

Windows communicates these messages to an application through a message queue.  An application must retrieve the next message to begin processing it.  If an application does not retrieve any messages for a certain period of time, then Windows assumes the application is locked up or busy and may give you a prompt to either terminate or wait for the program.  To keep messages going through the queue an application must implement a message pump.  In a simple implementation of a message pump two functions are called in a loop: GetMessage, which receives a MSG struct containing the information on a message, and DispatchMessage, which hands the message off to the application’s WNDPROC for processing.

If there are no messages available, calling GetMessage will result in the application waiting until there is a message.  During this wait, the main thread of an application will not use any CPU bandwidth.  In some applications, such as games or other applications that are continuously performing UI updates, using GetMessage is undesirable since it can cause the UI thread to wait.  In such applications, PeekMessage can be used instead.  Unlike GetMessage, PeekMessage will not block if there are no messages available; its return value indicates whether or not a new message was available.  If there are no messages to be processed the application can perform other tasks.

An application may modify a message between the time that it is retrieved with PeekMessage or GetMessage and when it is passed off through DispatchMessage.  You will see the use of a method named TranslateMessage which, despite its name, does not perform the modification of any messages.  Instead, it creates new messages in response to certain virtual key messages and adds these to the message queue.  For the programs that I present, the specifics of what it does are not important and I will not be modifying any messages in the message pump.
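
A minimal sketch of a PeekMessage based pump of the kind described here (Idle() is the method that the Direct2D posts above repurpose for the update/render loop):

MSG msg = {};
while (msg.message != WM_QUIT)
{
	if (PeekMessage(&msg, nullptr, 0, 0, PM_REMOVE))
	{
		TranslateMessage(&msg);	//may queue new messages for certain virtual keys
		DispatchMessage(&msg);	//hands the message off to the WNDPROC
	}
	else
	{
		Idle();	//no messages waiting; do other work
	}
}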

I do not use any of the traditional Win32 UI elements in the programs that I will be sharing (buttons, labels, text boxes, list boxes, etc.) but those elements could easily be created within the application.  If I were making a framework for such controls, I would probably make classes for each one.  My base class does have a CreateButton and CreateLabel method that are used for debug purposes.

[Code screenshot: the AppWindow.h file]

This class is abstract and cannot be directly instantiated.  So, there first must be a derived class.  The only members that the derived class absolutely must implement are the constructor and the method GetWindowClassName().  The constructor must accept an HINSTANCE argument and pass it to the AppWindow base class constructor. GetWindowClassName() must return a unique string.  This string is going to be used for registering the window class.

[Code screenshot: full implementation of a derived window]

To make an application that runs, all that is left to do is create a derived application window, call its Init() method (which is where it actually creates its Window) and call its message pump.

[Code screenshot: the main entry point creating the derived window]
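
In sketch form, a derived window and entry point look something like this (MessagePump() is an assumed method name; see the repository for the actual code):

class DerivedWindow : public AppWindow
{
public:
	DerivedWindow(HINSTANCE hInstance) : AppWindow(hInstance) { }
	LPCTSTR GetWindowClassName() override { return TEXT("DerivedWindow"); }
};

int APIENTRY wWinMain(HINSTANCE hInstance, HINSTANCE, LPWSTR, int)
{
	DerivedWindow window(hInstance);
	window.Init();			//creates the window
	window.MessagePump();	//runs until WM_QUIT is received
	return 0;
}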

Running the application will result in a window showing a label and a button.  It does not do anything yet, which is what is desired.  The code samples to be posted in the future require that there be an initialized window (which this provides).  The entirety of the code can be found on GitHub.

Relevant Github Commit

Latest Version

HRESULT and COM Exceptions

I have some posts queued that make use of C++ exception classes that have been useful in my previous projects.  These classes were primarily used for logging COM calls, which return an HRESULT indicating the success or failure of the call.  These return values should be examined and action should be taken if there is a failure.  What type of action to take will depend on the scenario.  In some cases an application may fall back on alternative functionality.  In other cases the failure may be irrecoverable.  The worst thing to do is nothing at all.  An early failure could result in a problem that is not noticed until later and require a painful debugging experience.

For the sample code that I share here, I will not implement full error handling, to avoid distraction.  But to avoid undetected failures, I do not want to leave the values unexamined.  In the sample code I will use an inline function named TOF (Throw On Failure).  TOF will throw an exception if it receives a failed return value.  In most of the sample code presented here, the exception will not be caught.  When the exception is not caught, the program is designed to terminate.  The TOF function itself returns the HRESULT if successful, so the HRESULT is still available for further examination if needed.
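
The actual exception classes are linked below; in sketch form, TOF amounts to the following (a standard exception is used here where the real code uses the custom classes):

#include <stdexcept>

//Throw On Failure: throws when given a failed HRESULT, otherwise passes the value through.
inline HRESULT TOF(HRESULT hr)
{
	if (FAILED(hr))
	{
		throw std::runtime_error("HRESULT indicated failure");
	}
	return hr;
}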

The exception classes and the TOF function can be found on Github.

https://github.com/j2inet/CppAppBase/blob/master/src/common/Exception.h


TIP: Multi-GPU WebGL Performance

There’s a large screen iMac that I use for development a lot. It has a 5K screen. The advantage of this is that when I am working on a project that runs at 4K resolution I can develop it on this machine and still have some pixels left over for some dev tools.  But a problem with the machine is that it never had a great GPU. It pushes a lot of pixels, but Apple used a mobile GPU (which are usually lower powered). It’s also an older machine. The solution for this: an external GPU. It took a bit of finagling but I got an Nvidia GTX 1080 working on the machine. When I tried testing out some WebGL shaders on the computer at full resolution the performance was awful! What was going on?

The performance indicated that the external GPU wasn’t being used. I thought that starting Chrome on the secondary (external) screen would solve this problem. It didn’t. Thankfully the Windows 10 Task Manager also shows GPU usage. I found that no matter where I placed the window, the internal (weak) GPU was the one being used.

My hypothesis on what was going on here is that Chrome was creating surfaces on the primary GPU. To test this I changed the primary screen to be one of the screens attached to the external GPU. The results were promising: I got much better performance. But there was still a significant difference between running full screen on the built-in screen and on the external one. The Task Manager showed that the Desktop Window Manager was using a lot more cycles when I ran the Chrome window on the internal screen. Basically the system was rendering using the external GPU and then copying the screen to the internal GPU on each cycle.  For my purposes this needs no solution. I connected the eGPU to a 4K display and did my testing from it.

If you’ve got multiple GPUs and wish for WebGL to run on a specific one, for now you’ll have to set the desired GPU to be your primary one.

[Image: Sonnet eGPU enclosure]

Changing the Default Tizen 5.0 Project for Samsung TVs

When using Tizen Studio if you start a project from one of the templates for a TV you may find that the project won’t deploy to a Samsung Consumer TV. There are a couple of changes that can be made to take care of this.

One is to edit config.xml. There are a couple of lines in it to be changed. There is an element named tizen:profile with a name attribute of “tv”. Change this to “tv-samsung”. The other change is in a file that isn’t listed by the IDE named “.tproject”.  Under the Platform element is a text value of “tv-5.0”. Change this to “tv-samsung-5.0”. I’ve found that even on a TV running Tizen 4.0 these changes are sufficient to work. Just don’t use any Tizen 5.0 features on a display that is running an older OS. The profile element change is shown below.
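
For reference, the config.xml change looks like the following (your file will contain other elements and attributes around this one):

<!-- before -->
<tizen:profile name="tv"/>
<!-- after -->
<tizen:profile name="tv-samsung"/>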

Related: Developing for older Samsung TVs


Using WITS for Samsung/Tizen TV Development

One of the development scenarios that makes me cringe is an environment in which the steps and time from changing a line of code to seeing its effect are high. This usually happens in an environment with specialized hardware, limited licenses, or sensitive configurations, leading to the development machine (as in the machine on which code is edited) not being suitable or capable of running the code that has been written.  There is sometimes some aspect of this in cross platform development. While emulators are often helpful in reducing this, they are not always a suitable solution since emulators don’t emulate 100% of the target platform’s functionality.

When developing for TVs running Tizen (which will be more than just Samsung TVs), Samsung has made available a tool called WITS to reduce the cycle from changing code to seeing it run.

Setting up WITS

To set up WITS you first need to have already installed and configured Tizen Studio and Node. The system’s PATH variable must also include the paths to tizen-studio/tools and tizen-studio/tools/ide/bin (you’ll need to complete those paths according to the location at which you’ve installed Tizen Studio).  You’ll also need to already have a certificate profile defined for your TV.

The files that you need for WITS are hosted on git. Clone the files onto your machine.

git clone https://github.com/Samsung/Wits.git

Enter the Wits folder and install the node dependencies

cd Wits
npm install

Next, in the folder there is a file named profileInfo.json. The contents of the file must be updated to point to the profiles.xml for your certificate and to name the certificate profile to use. Windows users, note that whenever you enter a path for Wits you will need to use forward slashes (/), not back slashes (\).  For my installation the updated file looks like the following.

{
  "name": "TizenTVTest2",
  "path": "C:/shares/sdks/tizen/tizen-studio-data/profile/profiles.xml"
}

Configuring Wits to Use Your Project

Wits needs to know the location of your project. Open connectionInfo.json. There is an array element named baseAppPaths. Enter the path to your Tizen application here.  If you would like to make things convenient, within this file also set the “ip” element to the IP address of the TV you are targeting. This isn’t necessary since you will be prompted for it when running a program, but it will default to the value that you enter here.

Running your Project

From the command prompt while in the Wits directory use npm to start the project

npm start

You will be prompted for a number of items. The default values for these items come from the connectionInfo.json file that you modified in the previous section. You should be able to press Enter without changing the values of any of these elements.

PS C:\shares\projects\j2inet\witsTest\Wits> npm start

> Wits@1.0.0 start C:\shares\projects\j2inet\witsTest\Wits
> node app.js

Start Wits............
? Input your Application Path : C:/shares/projects/j2inet/MastercardController/workspace/SystemInfo2
? Input your Application width (1920 or 1280) : 1920
? Input your TV Ip address(If using Emulator, input 0.0.0.0) : 10.11.86.62
? Input your port number : 8498
? Do you want to launch with chrome DevTools? : Yes

A few moments later you’ll see your project running on the TV.

Deploying File Changes

This is where Wits is extremely convenient. If you make a change to a file the application will automatically update on the TV. There’s nothing you need to do. Wits will watch the project for changes and react to them automatically!

Connecting Windows 10 IoT Core to a Hidden Network

For some odd reason, while Windows 10 IoT Core has the capability to connect to hidden networks, it doesn’t expose this capability in its UI. Given its target audience I can to some degree understand it not having features to guide a user through getting connected to a hidden network, while at the same time seeing this as an inconvenience.

Isn’t It Easier to Unhide the Network

No, at least not when you have no control over the network. There are arguments to be made on why hiding a network is not an effective security measure. Whether those arguments fail or make great points is irrelevant in environments where you personally have no control or influence over the network.

There Are Several Ways to Connect. Which Should I Use?

I found a few solutions to this problem, but I’m only presenting the one that I found to be satisficing.  The method requires that the IoT device first be connected to a wired network.

On a computer (as in your laptop or desktop) that already has a connection to the wireless network, export the wireless profile. Copy this to the Windows 10 IoT device and then import the profile. Let’s talk about how to do each one of those steps.

Exporting Your Wireless Profile

On your computer that has a connection to the wireless network, open a PowerShell instance.  Use the following command to export your wireless profile.

netsh wlan export profile name=<profile name>

Here substitute the name of your wireless profile for the last parameter (without the brackets).  This will be the same name that shows up in the Windows network settings for the network to which you are connected.  When you press Enter, netsh will create an XML file with the wireless profile. Take note of the location where it was saved.

Copying the Profile to the Windows 10 IoT Device

One of the convenient things about Windows 10 IoT Core is that it has many of the behaviours that the Windows desktop has. This includes the ability to read and write from the file system over the network. Connect your Windows 10 IoT device to a wired network and take note of the IP address that is assigned to it. In the Windows File Explorer on your desktop enter the following.

\\<IP address>\c$

You will be prompted to enter the username/password of the machine. The user name is Administrator. The password has in the past defaulted to p@ssw0rd, but you might have specified a different password at setup. Once authenticated you’ll see the file system for the device. Copy the XML file over.

Importing the Wireless Profile

Open a Powershell instance to the Raspberry Pi. The easiest way to do this is to use the Windows 10 IoT Dashboard. Under “My devices” you should see your device listed. Right-click on it and select “Launch PowerShell”.

[Image: launching PowerShell from the Windows 10 IoT Dashboard]

Once in PowerShell navigate to the directory in which you saved the XML profile. Use netsh to import it.

netsh wlan add profile filename=<path to the XML file>

After entering this command and pressing Enter, the device will be aware of the network. From the UI on the device, if you go into the network settings you can now select that hidden network. It will prompt you for the password and you’ll be connected.

Related Affiliate Links

Windows 10 for the Internet of Things, Book

Dragonboard 410c, A tiny board compatible with Windows 10 IoT with integrated GPS

Minnowboard Turbot, another Windows 10 IoT Compatible board

Raspberry Pi Starter Kit

TypeScript in Tizen

I was writing a program to run on my television and encountered a scenario that I’ve encountered many times before: an HTML enabled device supports a JavaScript standard that is older than the one that I would like to use. The easiest workaround for this is to use a tool that will compile from a more recent version of JavaScript (or something similar) back to the version that is supported by the hardware. This is something I’ve done when developing for BrightSign and other devices.

For targeting the Tizen based television I decided to use TypeScript to accomplish this; in addition to getting access to some more recent features that can be found in JavaScript, we also get type checking.

A bit of work was required to get this working though. On my first attempt I tried including the TypeScript files in the same folder as the project. This doesn’t work; when the project is being compiled the compiler will try to take these files and package them in the solution. This isn’t something that we want to happen. It’s necessary to have these files in a folder outside of the project folder to prevent this from happening. I moved the files and made a TypeScript configuration file that specified the destination to which I wanted the resulting JavaScript files written.

{
  "compilerOptions": {
    /* Basic Options */
    "target": "es2015",
    "module": "commonjs",
    "sourceMap": false,
    "outDir": "../tizenWorkspace/projectName/js",
    "strict": true,
    "noImplicitAny": true
  }
}

This almost works. The next problem encountered is that when there is a reference to anything on the tizen object the compiler will complain about it not having been declared. The tizen object, not being a web standard object, is not something that is recognized by the compiler. There are two ways to handle this problem. A workaround would be to declare the tizen object as being of type any. With this declaration the compiler will just ignore whatever we do with the object and not complain.

I made a TypeScript definition file named tizen.d.ts in which to place my definitions. TypeScript already has an understanding of the interface provided by the Window object. To augment this I declare another interface that will be merged with the understanding that TypeScript has and added a definition for the tizen member there.

declare	interface Window {  tizen:any }

That works, but it also eliminates some of the type safety features that TypeScript has to offer. Instead of working around the problem I wanted to address it. I wanted to provide the type definitions for the tizen object.

There’s a project called DefinitelyTyped in which contributors make TypeScript definitions that can be downloaded and shared with other developers targeting the same environment. At first glance there appear to be existing entries for targeting Tizen within the collection. But upon further inspection it turns out that the definitions that are there (at the time of the writing of this post) are for targeting a cross development tool that also supports Tizen. That’s not what I needed. Instead of relying on community provided definitions I’ll have to make my own. When I’m done, though, I may have a definition file that could be shared through DefinitelyTyped. Since that repository is constantly being updated I would encourage seeing what it has to offer before using the code that I provide here.

declare	interface Window {  tizen:ApplicationManager}

This is when I start my descent down the rabbit hole. To define the ApplicationManager interface that is implemented by the tizen object there are a number of other interfaces that must be defined. Those interfaces have dependencies on other interfaces.

The interfaces for the various objects are documented and can be found on a Tizen.org page. Browsing through it, there are some types mentioned that ultimately are strings of one kind or another. Within TypeScript we can make a declaration that is similar to a typedef, equating a custom type to another.

type ApplicationId = string;
type ApplicationContextId = string;
type PackageId = string;

There is also a frequently used callback type for the successes and errors of asynchronous calls. The links to the documentation for the functions’ call signatures are broken, taking me to a 404 page. I was more generic in defining these in my type definitions until I can get the specifics of the actual accepted call signatures.

type SuccessCallback = (...args: any[]) => void;
type ErrorCallback = (...args: any[]) => void;

The rest of the definitions are interfaces and follow the same patterns. I’m showing a few of the interfaces closer to the root of the definitions.

declare	interface Window {  tizen:ApplicationManager}
declare var tizen:tizenInterface;

interface tizenInterface {
    application:ApplicationManager;
}

interface ApplicationManager { 
    getCurrentApplication():Application;

    kill(contextId:string,
              successCallback:SuccessCallback,
              errorCallback:ErrorCallback):void ;

    launch( id:string, //ApplicationId
                successCallback:SuccessCallback,
                errorCallback:ErrorCallback):void;
    launchAppControl(appControl:ApplicationControl,
                        id?:ApplicationId, //ApplicationId
                          successCallback?:SuccessCallback,
                          errorCallback?:ErrorCallback,
                          replyCallback?:ApplicationControlDataArrayReplyCallback):void ;
     findAppControl(appControl:ApplicationControl,
                        successCallback:FindAppControlSuccessCallback,
                        errorCallback:ErrorCallback):void;

    getAppsContext(successCallback:ApplicationContextArraySuccessCallback,
                        errorCallback:ErrorCallback):void ;
    getAppContext(contextId:string):ApplicationContext;
    getAppsInfo(successCallback:ApplicationInformationArraySuccessCallback,
                     errorCallback?:ErrorCallback):void;
    getAppInfo(id?:ApplicationId ):ApplicationInformation;
    getAppCerts(id?:ApplicationId ):Array<ApplicationCertificate>;
    getAppSharedURI(id?:ApplicationId ):string;
    getAppMetaData(id?:ApplicationId ):Array<ApplicationMetaData>;
    addAppInfoEventListener(eventCallback:ApplicationInformationEventCallback):number;
    removeAppInfoEventListener( watchId:number):void ;    
}

There are a lot more objects that could be defined for Tizen. If you’ve come across this article, check out the DefinitelyTyped archives first. If you don’t find Tizen definitions there, you can download the version of the file that I have from here.

What is .Net?

I have some .Net related content that I plan to post and thought that I would revisit this question.

It’s a question I find interesting in that the answer has changed slightly over the years. In the earliest years it was a branding for technologies that were not necessarily related to each other; Windows .NET Server and Windows .NET Messenger are two products that had the branding at one point. But let’s not walk down memory lane and jump straight into the answer.

.NET is still a branding, but the technologies with the branding are related to each other. Microsoft uses the branding on their Common Language Runtime (CLR) products. That answer has only kicked the can down the road. What is the CLR?

The CLR is a virtual machine component. Executables targeting the CLR don’t necessarily contain any code that is native to the processor on which they are running (though they may contain native code, but let’s ignore that for a moment). CLR binaries can be distributed with no processor dependent executable code within them. At runtime, when the code is being executed, it is converted to machine code as needed. Because of this the same program can be run on machines that have different processor architectures. The computer on which a program runs needs to have the runtime that is specific to its architecture and operating system.

This system might sound familiar, as modern Java does something similar. There was a time when Microsoft was invested in Java virtual machines and made the first Java runtime that compiled Java binaries to machine language. The entity that owned Java at the time (Sun) wasn’t happy about this, and they took Microsoft to court for deviating from the standard of how Java virtual machines worked and for using the Internet as a method of distribution, among other reasons. This disagreement might sound petty, and in part it was. But there were good reasons for their position that I’ll present in another post. This interaction added weight to the argument that Microsoft should have their own virtual machine. They also made their own programming languages (C# and Visual Basic .NET) and a few CLRs for x86, x64, and their mobile devices.

The CLR, the runtime at the core of the .Net Framework, has seen several updates over the years. Microsoft published the runtime and language as open standards, and eventually made the CLR itself open source. This contributed to another CLR implementation being created, named Mono, which allowed .Net Framework applications to run on Linux and Mac.

If you look up .Net now you’ll find a few .NET systems listed.

  • .Net Standard
  • .Net Core (2016)
  • .Net Framework (2002)
  • ASP.Net / ASP.Net Core

What are these?

.Net Standard is a specification of the set of APIs that are expected to be in all implementations of the .Net Framework. Think of this as analogous to an interface; .NET standard itself isn’t an implementation. If you make an application that sticks with these APIs then it will have a wide range of compatible targets.

For the .Net Framework, only one version of the framework can be installed at a time. Microsoft generally kept backwards compatibility, but it wasn’t perfect. Since a system could only have one version of the Framework installed, in corporations updating the Framework had to be a company level decision.

.Net Core was made to contain the most common features of the .Net Framework, plus a few new ones. It was made with multiple operating systems and multiple versions in mind.  A system can have multiple versions of .Net Core installed and they can run side-by-side.  From here on, Microsoft will be putting its efforts into improving .Net Core. Microsoft will continue to support the .Net Framework, but don’t expect to see new features in it; the new features will be coming to .Net Core. There is a lot of legacy functionality from the .Net Framework that did not get ported over to .Net Core in the interest of performance and compatibility.

ASP (Active Server Pages) is the name for Microsoft’s web development system. Some of the earlier versions used a language that was similar to Visual Basic (yuck). The first version of ASP that supported .NET was called ASP.NET. ASP.NET used the .Net runtime, and the more recent version supports the .NET Core runtime.  Traditionally ASP pages were hosted within IIS (Internet Information Services), a Windows component for hosting web pages. With the modern versions, while this is still an option, ASP.NET pages can be hosted outside of IIS too.

If you are starting a new desktop .Net project and don’t know what version to use, the safe choice will generally be .Net Core. In my opinion the best feature is its ability to run on multiple systems (Mac, Linux, Windows, and various IoT devices including the Raspberry Pi).

Trying to learn C# and .Net Core? This is a book I would recommend.

Credit Cards and Magnetic Strips in the USA

Yes, in the USA we still use magnetic strips on credit cards.

I’ve found myself explaining this in interactions with people online more than once.  One of the more recent incidents was about a phone that used magnets to keep something in place.  I had commented that with such a phone I would want to make sure that it didn’t get near my bank cards because of the magnets.  I received responses of confusion.  I’ll address some of the frequent responses.

  • Why does it matter if it gets near a magnet?
    – If a credit card is in contact with a magnet that is strong enough, or for long enough, the data on the strip can be damaged, making the card unusable in magnetic strip readers.
  • Why not use contactless payment?
    – Banks in the USA for the most part don't issue contactless cards. I've checked for them and found that they do exist, but only on certain products from a bank. For example, if you have a card from Main Street Bank Visa (a bank name I've just made up) it might not be contactless, but if you got the Main Street Bank Visa Barbie Edition (also a made-up card) then it may have the contactless feature.
  • Why not use the chip and PIN?
    – The chip isn't accepted at some POS terminals, and for credit cards in the USA we almost never use PINs. For bank cards associated with a checking account, though, a PIN must be entered if the card is processed as a bank card instead of a credit card. High-volume stores, including fast food, also prefer not to use the PIN: the amount of money they can make is heavily dependent on how quickly they can complete the purchase process, and at busy times they stand to lose more money on missed sales than on fraud.

Typically, people in the USA who do have experience with contactless payment have it through Apple Pay or Samsung Pay. While Android Pay had been present years before, it was hamstrung by three of the four nationwide phone carriers here, who decided to block Android Pay and create their own payment system. Their system was called ISIS and was based on the American Express SERVE cards. But things happened in the news, and ISIS was no longer a marketing-friendly name. They changed the name to Softcard, but the effort ultimately failed.

I myself was able to use Android Pay for some time, since I had a Google phone that wasn't distributed through the carriers. It worked fine until Apple Pay was released. When Apple Pay was released, many major retailers decided to deactivate their contactless receivers; they wanted in on mobile app payments, and they blocked such payments altogether. After that, even if I could visually identify a contactless payment terminal in a store, I didn't know whether it worked. At one point it failed more often than not, and it was easier to just not bother with it.

Why not use the app that the retailer made instead of the phone's payment app? There are two reasons. The first is that the retailers didn't have such an app; they were starting development of their systems and blocked other contactless payments while they prepared their own. Once such apps were available, my personal motivation was that data breaches happen regularly, and it is safer to minimize the number of accounts in which personal information appears. Even after an account is closed, retailers in the USA may retain records of the information, so it remains vulnerable to a breach after the account with the retailer has been closed.

With that said, I'm hoping to see a shift in how credit cards are handled in the USA. There's a high rate of credit card fraud here, and several times within a year my associates and I find that we must get a card replaced because of data breaches.

 

 

End of Linux on DeX Beta

Unfortunately, Samsung has recently announced the end of Linux on DeX support. The last time I mentioned LoD was when Samsung announced that it was coming to more devices. A close associate recently acquired one of those devices, and when I tried to get her signed up for the beta I found that there was no way to do so. That was about two weeks before the Samsung announcement.

To summarize, Samsung stated that as one upgrades to Android 10 they will lose the Linux on DeX functionality; if someone wants to keep running a full Linux setup on their device, they will have to avoid upgrading. There was no statement on whether there will be anything to replace this functionality.

Personally, I will miss this functionality. When I was finally able to access it, I was able to leave my computer behind when I went on trips. While it wasn't as capable as my laptop, it supported enough functionality to be a secondary development solution; I could do enough to respond to some unanticipated requests. I had access to Git, Node, and various other development and command line tools, including Visual Studio Code. Unless Google or Samsung plans to release a replacement, this functionality will be lost with the next OS update.

In the meantime I'll be looking back to the Chromebook. The Chromebook has some limited Linux support that may be helpful, though the last time I used it, it had nowhere near as much functionality as LoD.

It will be missed. 😦

Creating a new Tizen Project for Samsung TVs

The objective of this entry, while basic, is to cover an easy mistake to make; it is a mistake that I have made. I've got a new Samsung Series 6 TV, and I tried to deploy a new project to it. Errors were encountered and frustration levels were raised, but eventually I met with success.

The Samsung TVs are more locked down than some of the other Tizen devices that I've worked with, and the more recent ones are more locked down than some of the previous ones. When things go wrong, you'll see errors like the ones shown below.




The TV I am using runs version 4 of the Tizen operating system. I made a new Tizen project, selecting to create it from the TV templates and choosing Tizen 4 as the platform.

TizenNewProject
The new project dialog with a TV template selected

Attempts to debug the project created from this template fail. I get an error message stating:

Launching [your app name here] has encountered a problem
closed
   closed
     closed

The terminal output isn’t of much help.

Launching the Tizen application...
# If you want to see the detailed information,
# please set the logging level to DEBUG in Preferences and check the log file in 'C:\tizen-studio-data\ide\logs/ide-20191006_014055.log'.

[Initializing the launch environment...]
RDS: Off
Target information: UN43NU6900
Application information: Id(07DOxO8iKR.SystemInfo3), Package Name(07DOxO8iKR), Project Name([your app name here])
Unexpected stop progress...
(0.337 sec)

So what gives?  There are two ways to address this that are essentially two paths to the same destination. The manual solution involves editing a couple of configuration options in the files config.xml and .tproject.

The file .tproject is not visible in the Tizen IDE, but you can still open it through File -> Open. This file is an XML file. There is an element named <platforms> that has a sub-element <platform>; I changed the platform value here to tv-samsung-540. The other change, in config.xml, is on an element of the form <tizen:profile name="tv"/>. This needs to be changed to <tizen:profile name="tv-samsung"/>.

Why are these changes necessary? I don't have full confirmation on this, but I believe it has to do with differences between a generic Tizen device and Samsung Tizen devices. At the time of this writing I know of no physical implementations of a non-Samsung Tizen TV, but the generic profile does exist as a specification.

The other solution is applied at the creation of the project. When creating a new project, do not select from the TV project templates; instead, select the Custom project templates. Within these templates there is a TV template subtype. If you choose this project type, you will start off with the configuration files mentioned above already having the values that are needed.

As the Tizen operating system and the development environment are updated from year to year, more readers will read this entry after a new Tizen version has been released than before. The exact values you use will likely differ from the ones I have used, and you may need to update them accordingly, but hopefully this points you in the right direction.

Windows 10 IoT Core Installer Blocked?

If you try to install Windows 10 IoT Core from the installer, chances are you will get blocked with the following errors.

Your administrator has blocked this application because it potentially poses
a security risk to your computer.

Your security settings do not allow this application to be installed on your
computer.

I ran into this recently when preparing a Raspberry Pi to run Windows 10 IoT Core. What gives? Well, it is due to a Windows security setting. The setting can be changed by editing the registry. The relevant key can be found at the following location:

Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\Security\TrustManager\PromptingLevel

Inside this path there is a value named Internet that is set to Disabled. Change it to Enabled; you should then be able to perform the installation, after which you can change the value back.
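If you'd rather make the change from an elevated command prompt than through the registry editor, a command along the following lines should work. This is my own shorthand for the edit described above, not something from the installer's documentation.

reg add "HKLM\SOFTWARE\Microsoft\.NETFramework\Security\TrustManager\PromptingLevel" /v Internet /t REG_SZ /d Enabled /f

Running it again with /d Disabled restores the original setting.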


Using a Game Pad in the Browser


I was thinking about a video game from my childhood and how easy it would be to rebuild the game now. Having decided to do this, I had to decide what platform to target. I chose to make it a browser application. While the game could be played with a keyboard or mouse, I wanted to use a game pad to play it the way it was meant to be played.

Game pads can be used in HTML. I've not seen many applications that actually take advantage of this, but as game streaming becomes more popular this is likely to change. Let's start with one of the simplest things we can do: detecting what game pads are present. I'm going to share all of my code in TypeScript. TypeScript is basically JavaScript with type information added, and it compiles down to JavaScript. The advantage of using it here is that you will know the data types of a variable or parameter instead of needing to analyze the code and make inferences.

Detecting a Game Pad

To start, we want to ensure that the browser supports the game pad API. To do this, just check whether or not the getGamepads function exists on the navigator object.

var isGamepadSupported:boolean = (navigator.getGamepads != null);

There are two ways that we can detect the presence of game pads: we can listen for the events that are fired when a controller is connected or disconnected, or we can just poll for their presence. Here's a code sample that listens for those events.

function handleGamePadConnected(e:GamepadEvent) {
     console.log('A game pad was connected', e);
}
function handleGamePadDisconnected(e:GamepadEvent) {
     console.log('A game pad was disconnected', e);
}
window.addEventListener("gamepadconnected", handleGamePadConnected);
window.addEventListener("gamepaddisconnected", handleGamePadDisconnected);

If we run this code within a page, we can see it get triggered by ensuring the browser has focus (a browser without focus will not receive game pad events) and then connecting or disconnecting a controller (if USB) or turning a controller on and off (if Bluetooth). On some controllers you may have to press a button for the controller to fully connect. The event passed to the handlers has a field named gamepad that contains all of the data for the game pad being used. The other way to get this information is to read the game pad state directly: calling navigator.getGamepads() will return information on all the game pads connected to the system. A word of warning, though: the value returned is an array that can have null elements in it. If you iterate through the array, don't assume that a valid index actually holds an object.

This might sound weird at first, but imagine you are playing a four-player game and player 2 decides to turn off his controller and leave. Rather than assigning new indices to players 3 and 4, their controller indices stay the same and the second element in the array becomes null. To get the state of all of the game pads attached to the system, use the following.

var controllerStates:Array<Gamepad | null> = navigator.getGamepads();

When I call this method I always receive an array with up to 4 elements in it, most of which are null. If this function is called within the game loop, from one call to the next you can see which controllers are present. Keep in mind that the object returned here is a snapshot of the game pad's state, not a reference to the game pad itself; if you hold onto the object, it will not update as the state of the controller changes.
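As a minimal sketch of that polling pattern (the function name pollGamepads and the use of requestAnimationFrame are my own choices for illustration):

function pollGamepads(): void {
     // getGamepads() returns a sparse array; disconnected slots are null.
     const states = navigator.getGamepads();
     for (const pad of states) {
          if (pad == null) {
               continue; // no controller at this index
          }
          console.log(`pad ${pad.index}: ${pad.buttons.length} buttons, ${pad.axes.length} axes`);
     }
     // Each call returns a fresh snapshot; the objects do not update on their own.
     window.requestAnimationFrame(pollGamepads);
}
window.requestAnimationFrame(pollGamepads);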

There are two properties on the Gamepad object of most interest to us: the buttons property has the state of the buttons, and the axes property contains the state of each axis for each directional control on the game pad.

Axes

A D-pad input on a controller has two axes that can each range from -1 to 1: a vertical axis with three possible values and a horizontal axis with three possible values. While one might expect the neutral position of the controller to be zero, I've found that even with my digital controllers the neutral value is near zero but not quite zero. There is a range of non-zero values that we need to treat as zero.

For the analog sticks the range of values returned will also be between -1 and 1 for each axis, but any value within this range can be returned depending on how far the directional stick has been pushed. When the stick isn't being touched, the axis will return a value that is near zero. Like the digital input, there is a near-zero range that should be treated as zero. Note that the controller interface may report more axes than actually exist on the controller.
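A simple way to handle this is a dead zone function like the sketch below. The 0.2 threshold is an arbitrary value of mine and may need tuning for your controllers.

function applyDeadZone(value: number, threshold: number = 0.2): number {
     // Treat near-zero readings as zero so a resting stick registers no movement.
     return (Math.abs(value) < threshold) ? 0 : value;
}

// Example usage on the first two axes of a game pad:
// const x = applyDeadZone(pad.axes[0]);
// const y = applyDeadZone(pad.axes[1]);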

Buttons

The buttons element of a Gamepad object has a list of buttons. The collection may contain more objects than there are physical buttons on the controller; these non-existent buttons will never change state. The GamepadButton interface has three attributes.

  • pressed – will be true if the button is pressed, false otherwise.
  • touched – for controllers that can detect touch, this attribute will be true if the button is touched, false otherwise.
  • value – a value between 0 and 1 that tells how far the button is pressed or how hard it is pressed.

Even if you have a controller that doesn't support touch or analog input, the touched and value attributes will still update. If a button is pressed it can be inferred that it is being touched, and the touched attribute updates accordingly. For a digital button the value attribute will only be 0 or 1, with no values in between.

interface GamepadButton {
     readonly pressed:boolean;
     readonly touched:boolean;
     readonly value:number;
}
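Reading a button state might look like the following. Treating index 0 as the button of interest is just an example; which physical button that is depends on the controller's mapping.

const pads = navigator.getGamepads();
const pad = pads[0];
if (pad != null) {
     const fire = pad.buttons[0];
     if (fire.pressed) {
          console.log(`button 0 pressed, value ${fire.value.toFixed(2)}`);
     }
}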

Other Attributes

There are some other attributes on the Gamepad object that I haven't discussed here. The Gamepad object has an index attribute that identifies its own index in the array. There is a string id field that gives a name identifying the controller. There is also a timestamp attribute that indicates the time at which the state was captured.

interface Gamepad {
     readonly id:string;
     readonly index:number;
     readonly connected:boolean;
     readonly timestamp:number;
     readonly mapping:GamepadMappingType;
     readonly axes:Array<number>;
     readonly buttons:Array<GamepadButton>;
}

 

Code Sample

The code sample here reads the states of up to 4 controllers and shows them on the screen. You can download the code sample directly to view it in full. The HTML page itself is minimal; most of the elements are created by the script as they are needed (more on that below).

 


The file main.js here was compiled from a TypeScript file, main.ts. Execution in main.ts starts after the document is loaded. The first method executed is named start. It adds some handlers for the game pad being connected or disconnected; these handlers only print the event out to the console. We are more interested in what is running within the interval: there, the states of all of the game pads are retrieved, and then a function (updateController) is called to display them on the screen.

 

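Here is a minimal sketch of that flow; the polling period and the updateController signature are my assumptions, not necessarily what the downloadable sample uses.

// updateController is sketched conceptually below; declared here so the sketch compiles.
declare function updateController(pad: Gamepad): void;

function start(): void {
     // Log connections and disconnections to the console.
     window.addEventListener("gamepadconnected", (e: GamepadEvent) => console.log("connected", e));
     window.addEventListener("gamepaddisconnected", (e: GamepadEvent) => console.log("disconnected", e));
     // Poll the game pad states on an interval and update the display.
     setInterval(() => {
          const states = navigator.getGamepads();
          for (const pad of states) {
               if (pad != null) {
                    updateController(pad);
               }
          }
     }, 16); // roughly once per frame; the exact period is an assumption
}
document.addEventListener("DOMContentLoaded", start);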

updateController finds the HTML block to which a specific controller index is assigned and updates the represented button and axis states. The functions updateAxes and updateButtons take care of the details. Several of the HTML elements needed for this were not declared in the page; instead they are created as they are needed.

In updateAxes, if the needed element doesn't exist I create it, and then I show the value of the axis.
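A sketch of how that might look; the element IDs and the container parameter are my own invention.

function updateAxes(pad: Gamepad, container: HTMLElement): void {
     for (let i = 0; i < pad.axes.length; ++i) {
          const id = `pad${pad.index}-axis${i}`;
          let element = document.getElementById(id);
          if (element == null) {
               // The element for this axis doesn't exist yet; create it on demand.
               element = document.createElement("div");
               element.id = id;
               container.appendChild(element);
          }
          element.innerText = `axis ${i}: ${pad.axes[i].toFixed(2)}`;
     }
}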


The function updateButtons does pretty much the same thing, only instead of a single element per input it updates three: one each for the pressed, touched, and value attributes.

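A matching sketch of updateButtons, again with invented element IDs:

function updateButtons(pad: Gamepad, container: HTMLElement): void {
     for (let i = 0; i < pad.buttons.length; ++i) {
          const button = pad.buttons[i];
          // One element for each of the three button attributes.
          const fields: Array<[string, string]> = [
               ["pressed", String(button.pressed)],
               ["touched", String(button.touched)],
               ["value", button.value.toFixed(2)]
          ];
          for (const [name, value] of fields) {
               const id = `pad${pad.index}-button${i}-${name}`;
               let element = document.getElementById(id);
               if (element == null) {
                    element = document.createElement("div");
                    element.id = id;
                    container.appendChild(element);
               }
               element.innerText = `button ${i} ${name}: ${value}`;
          }
     }
}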

Do you have a game controller and want to try things out? I've got the sample loaded on a web page that you can try. Get your computer and controller paired, then see the demo run at http://j2i.net/apps/gamePadStatus/