Please Provide an Offline Mode | Opinion

With the apparent ubiquity of internet connections, more applications not only support online functionality but require it. While performing an update of an application a month ago, I decided to keep an instance of the previous version in place. The new application, though it retained the old functionality in some sense, moved that functionality to a web service. As a frequent traveler, I’m no stranger to finding myself without an internet connection. Whether it is on a plane with no or failed Internet, in a building that acts as a Faraday cage, or on a connection that blocks access to services that are not white-listed, I sometimes find myself in situations where there is no open data pathway between my computer and a necessary service. Though these scenarios might sound specific to me, there’s another scenario that may be more generally familiar: many people have experienced impacts from Azure or AWS services going offline.


I understand there are some services for which there is no graceful failure mode when offline, such as an application used to order food or a ride, which requires establishing communication between parties, or a service that can’t be implemented locally due to licensing or computational constraints. But there are some really bad examples of applications that don’t work offline. Just about any game published by Netflix is a great example: connect to a network that blocks access to Netflix, or lose your connection, and you cannot play solitaire. As I write this, I have just finished making a stub implementation of a service so that I could run a developer tool offline.


One of the earliest encounters I remember with failed connection assumptions was when Microsoft announced the Xbox One before release as an always-online console that had no need for an optical drive. That announcement wasn’t welcomed, and different groups objected for different reasons. Some were soldiers, who knew this translated into simply not being able to use the Xbox One in locations where their connections were limited for security reasons. Microsoft also figured that people didn’t need large storage: if the drive got full, the console could just erase a game and download it again when the user wanted to play. Of course, this assumed that the user could establish a connection of sufficient bandwidth. It also overlooked that not everyone has an unlimited internet connection. I’m in a market with two Internet providers (AT&T and Xfinity/Comcast), both of which have monthly data quotas; going over them results in overage fees. There was a game whose download I had to pause for a few days because that one game alone would have used over half of the bytes I was allowed to transfer in a month. While outcries against the always-online plan were heard, I think they primarily delayed the transition rather than prevented it.
When online applications were becoming more common, there was consideration given to offline modes. Microsoft even had a publication, “Smart Clients: Patterns and Practices,” that contained design patterns for making applications resilient to unstable and failed connections. Though this publication contains older wisdom, I don’t think it is completely outdated. In the book, connections are treated as less than 100% reliable. Applications are treated as “occasionally connected”: working from cached data, queuing requests when a service isn’t available, and sending information to a service and getting responses when they become available. Microsoft Outlook was one of the examples of an application built with this philosophy.


In making applications for clients, connection failure modes are tested. The most basic test is simply turning off the Internet connection and seeing whether the application is able to maintain some subset of its functionality. For example, a digital signage application that shows performance schedules has those schedules stored locally, updating its local copy with information from a CMS at some interval. If the connection to the CMS fails, it still has a local copy of the schedule to present. When a connection fails, reservation functionality might be replaced with a screen directing someone to a ticketing booth or to a phone number for more information. The application doesn’t become completely non-functional. Functionality degrades with the loss of a connection, but it doesn’t flatline.
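As a sketch of that “occasionally connected” idea, the fallback logic can be as small as this. The types and the fetch function here are hypothetical illustrations, not the actual signage code:

```cpp
#include <cassert>
#include <functional>
#include <optional>
#include <string>

// Hypothetical sketch of an "occasionally connected" data source: prefer live
// data, but fall back to the last good local copy when the connection fails.
struct Schedule { std::string entries; };

class ScheduleSource {
public:
    explicit ScheduleSource(std::function<std::optional<Schedule>()> fetchRemote)
        : fetchRemote_(std::move(fetchRemote)) {}

    // Returns live data when the service answers, otherwise the cached copy.
    std::optional<Schedule> current() {
        if (auto live = fetchRemote_()) {
            cache_ = *live;   // refresh the local copy while we can
            return live;
        }
        return cache_;        // degrade gracefully instead of going blank
    }

private:
    std::function<std::optional<Schedule>()> fetchRemote_;
    std::optional<Schedule> cache_;
};
```

When the CMS is reachable, `current()` refreshes and returns live data; when it is not, the display keeps showing the last schedule it saw.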


Whether it is a game, a utility, or something else, I wish that some bit of offline functionality would be retained. I fear this will not happen. With consumer access to RAM, solid-state storage, and GPUs becoming more constrained as data centers and AI companies consume them, hardware is trending toward scarcity, with laptops taking on more of the character of netbooks. Hosting functionality in the cloud and streaming it to the user has become one of the answers to the lack of affordable hardware.


I don’t imagine that this will change, but I wish some consideration were given to keeping an application at least partially functional when a connection cannot be established.


Mastodon: @j2inet@masto.ai
Instagram: @j2inet
Facebook: @j2inet
YouTube: @j2inet
Telegram: j2inet
Bluesky: @j2i.net

Use Convolutions, Not Randomness | Opinion

Random functions can be useful; there is even specialized hardware in many processors for generating random numbers. That said, there are situations where the requirements, as written from a user perspective, call for randomness, but I’ve developed an inclination to avoid it. Thinking about projects I’ve worked on over the past decade, there are instances where the requirements called for a program to make a random selection from a pool of possible selections and present it to the user. There have been times when I’ve done exactly that, only to have a bug written up and sent back to me. It is by way of these bug reports that I’ve learned new, previously unstated requirements.

One bug report was about having received the same selection twice in a row. “Well, yes. It’s random, sometimes that can happen.” A new request was made for this to not happen. I added logic to prevent this, knowing that this made the selection less random; if the previous selection is informative to something about the following selection, then it is no longer completely random. In another instance, I received a bug report stating that some item or another wasn’t showing up frequently enough or something showed up too frequently. Now, I needed to keep track of the frequencies at which selections had been presented and adjust the selection process to give a higher chance to items that had a lower frequency of having been shown.

The most egregious problem I encountered was when the software was genuinely misbehaving. For me, the problem was that the misbehaviour never occurred on my machine. I tried letting the program run a hundred cycles of selecting items, and it all worked fine. On the machine that QA was using, the problem was infrequent, but it did occur. Because of this inconsistency, I decided to rip out the uses of the random function and replace them with a routine that shuffled the selections around before making a selection. The routine was deterministic, not random, but it convoluted the selection process enough to make it perceivably random and not generally predictable to a casual observer. Most importantly of all, the QA machine and my development machine were now making the same selections in the same order. Through this, we found that the bug was only observable when one specific selection followed another specific selection, something that had only about a 2% chance of happening.
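A minimal sketch of that shuffle-based approach might look like the following. This assumes C++; the item type and the seeding are illustrative, not the actual project code. A fixed seed makes every machine produce the same sequence, which is exactly what made the bug reproducible:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <random>
#include <vector>

// Deterministic "shuffled deck" selector: walk a shuffled copy of the items,
// reshuffling when the deck is exhausted. With a fixed seed, every run on
// every machine yields the same order -- perceivably random, but repeatable.
class ShuffledSelector {
public:
    ShuffledSelector(std::vector<int> items, unsigned seed)
        : items_(std::move(items)), engine_(seed) {
        reshuffle();
    }

    int next() {
        if (index_ == items_.size()) reshuffle();
        return items_[index_++];
    }

private:
    void reshuffle() {
        std::shuffle(items_.begin(), items_.end(), engine_);
        index_ = 0;
    }

    std::vector<int> items_;
    std::mt19937 engine_;   // deterministic generator; the seed fixes the order
    std::size_t index_ = 0;
};
```

As a side effect, each pass through the deck shows every item exactly once before any repeats, which also addresses the frequency complaints from the earlier bug reports.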

Now, when I’m asked to make a random selection, I consider whether it really needs to be random or only perceivably random. Some routines, such as the creation of identifiers like GUIDs, do need randomness. But much of the time, it isn’t actually required.


Posts may contain products with affiliate links. When you make purchases using these links, we receive a small commission at no extra cost to you. Thank you for your support.


Mixing Rendering Technologies on Windows | DirectX

Windows has many methods of rendering images to the screen. These include GDI, Direct3D, Direct2D, DirectWrite, XAML, Windows Forms, Win2D, and WinUI. Windows 10 brought support for mixing some of these rendering technologies through XAML Islands, and there are other ways of mixing them as well. I was interested in mixing Direct3D with GDI and Direct2D. The rendering technologies were going to generate what can be viewed as different layers of my UI: a background visual, a 3D image on top of that, and data overlays and command UI elements above those. How might one mix these together?

Because the UI elements were going to be organized in layers, they are a great candidate for DirectComposition. DirectComposition takes different UI elements and handles combining them. It works with graphical objects that implement the IDXGISurface interface. It can also display swap chains, which are used in Direct3D and Direct2D for efficiently rendering a scene and then swapping the currently displayed scene with the new one. For other content, one can render to IDCompositionSurface objects and use those.

Base Classes

I won’t cover in this post how I created a Win32 application window. If you want details on that, see the post I made a few years ago about a C++ application base class. Though this is not the same version of the component I described there, the two are similar enough for that post to serve as an explanation.

Patterns Used in the Example Code

In the example code, there are some patterns that are used throughout that I explain here.

do {} while(false);

There are several blocks of code that are wrapped in a do { } while(false); control structure. While this structure is usually used for loops, I’m using it for something else. When a break; statement is executed, the flow of execution exits the current block. A majority of the calls that I make to DirectComposition and some other APIs return HRESULT values that indicate success or failure. If a failure occurs, I need the code to exit early. This could be done through a goto statement, by throwing an exception, or by executing a break; statement within a do { } while(false) block.

Every time an HRESULT value is returned, I must check it for success. If it is a failure code, then I must execute a break;. To simplify this comparison, I’ve defined a macro and wrap each function call with it.

#define BOF(value)  if (FAILED(value)) break;

BOF(hr = dcompDevice->CreateVisual(&dcompRootVisual));
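Put together, the pattern looks like the sketch below. The step functions are hypothetical stand-ins for the DirectComposition calls, and the HRESULT definitions are minimal stand-ins so the sketch compiles outside Windows (on Windows they come from the SDK headers):

```cpp
#include <cassert>
#include <cstdint>

// Minimal stand-ins for the Windows definitions, so this sketch is self-contained.
using HRESULT = std::int32_t;
constexpr HRESULT S_OK = 0;
constexpr HRESULT E_FAIL = static_cast<HRESULT>(0x80004005);
constexpr bool FAILED(HRESULT hr) { return hr < 0; }

#define BOF(value)  if (FAILED(value)) break;

// Hypothetical initialization steps; StepTwo simulates a failing API call.
bool stepThreeRan = false;
HRESULT StepOne()   { return S_OK; }
HRESULT StepTwo()   { return E_FAIL; }
HRESULT StepThree() { stepThreeRan = true; return S_OK; }

HRESULT Initialize() {
    HRESULT hr = S_OK;
    do {
        BOF(hr = StepOne());
        BOF(hr = StepTwo());    // fails here; break skips the remaining steps
        BOF(hr = StepThree());
    } while (false);
    // Cleanup placed here runs no matter where the break occurred.
    return hr;
}
```

The failing step short-circuits the remaining calls while still letting execution fall through to any shared cleanup after the loop, which is the whole point of the structure.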

I also use the Windows Runtime C++ Template Library (WRL), specifically Microsoft::WRL::ComPtr. The ComPtr class is like std::shared_ptr: a smart pointer that uses reference counting to keep an object alive until there are no more references to it. The ComPtr class is specifically for COM objects. The details of COM are not important here beyond a couple of things: COM objects implement one or more interfaces, and when there are no references left to a COM object, it should be deleted and its resources reclaimed. When all ComPtr instances referencing an object go out of scope, the reference count drops to zero and cleanup is triggered. Many of the DirectComposition and Direct3D objects implement COM interfaces.

Creating a DirectComposition application

Since DirectComposition is a native API, an easy way to create such an application is to create a Win32 application that holds DirectComposition objects. The Win32 graphical APIs will still work in this application; the developer can select whether the plain Win32 UI objects appear on top of or behind the DirectComposition surface. The DirectComposition API manages resources on the graphics card, and some of the objects used come from Direct3D. To initialize DirectComposition, we start by initializing Direct3D. We make a call to D3D11CreateDevice() to create a Direct3D device, request the IDXGIDevice interface from it, and pass that to DCompositionCreateDevice() to create an IDCompositionDevice object.

ComPtr<ID3D11Device>         deviceTemp;
ComPtr<ID3D11Device4>        d3dDevice;
ComPtr<ID3D11DeviceContext>  d3dContext;
ComPtr<IDXGIDevice4>         dxgiDevice;
ComPtr<IDCompositionDevice>  dcompDevice;
D3D_FEATURE_LEVEL            featureSupportLevel;

DWORD createDeviceFlags = D3D11_CREATE_DEVICE_BGRA_SUPPORT;
#if defined(_DEBUG)
// Enable the Direct3D debug layer while developing.
createDeviceFlags |= D3D11_CREATE_DEVICE_DEBUG;
#endif
BOF(hr = D3D11CreateDevice(
	nullptr,                    // default adapter
	D3D_DRIVER_TYPE_HARDWARE,
	nullptr,
	createDeviceFlags,
	nullptr,
	0,
	D3D11_SDK_VERSION,
	&deviceTemp,
	&featureSupportLevel,
	&d3dContext
));
BOF(hr = deviceTemp.As(&d3dDevice));
BOF(hr = d3dDevice.As(&dxgiDevice));
BOF(hr = DCompositionCreateDevice(
	dxgiDevice.Get(),
	IID_PPV_ARGS(&dcompDevice)
));

We use the IDCompositionDevice object to create a target. The IDCompositionDevice::CreateTargetForHwnd function takes a handle to the window that will show the objects. The first parameter is the handle to the window. The second parameter is a BOOL that specifies whether the DirectComposition content should show up on top of (TRUE) or behind (FALSE) other graphical objects in the application window. The third parameter is an out parameter that gives us a pointer to an IDCompositionTarget object. Most of our DirectComposition function calls will occur through this object or through objects that it created.

BOF(hr = dcompDevice->CreateTargetForHwnd(
	_hWnd,
	TRUE,
	&dcompTarget
));
BOF(hr = dcompDevice->CreateVisual(&dcompRootVisual));

Visuals and Surfaces

Elements managed by DirectComposition are arranged hierarchically. An IDCompositionVisual object (hereon referred to simply as a “visual”) will be at the root of this hierarchy. Visuals can contain both an IDCompositionSurface (hereon “surface” or “content”) to display and several other visuals. Each visual can also have transformations applied to it, such as scaling, rotating, skewing, or offsetting. These transformations affect both the object to which they are directly applied and its child objects. Scaling a root visual to 25% of its original size will also reduce its child objects to 25%. Each of the child visuals may have its own transformations, which are applied as well.

Visuals

Visuals are created through IDCompositionDevice::CreateVisual. The function accepts only one parameter: a pointer that will receive the new visual. Child visuals are added to a visual through IDCompositionVisual::AddVisual(). If we want to apply a transformation to a visual, we can create a transformation object through functions on IDCompositionDevice:

  • IDCompositionDevice::CreateScaleTransform – For making a surface larger or smaller
  • IDCompositionDevice::CreateTranslateTransform – For moving a surface along the X or Y axis
  • IDCompositionDevice::CreateRotateTransform – For rotating a surface
  • IDCompositionDevice::CreateSkewTransform – For skewing a surface
  • IDCompositionDevice::CreateMatrixTransform – For creating a custom transform from your own calculations
  • IDCompositionDevice::CreateRectangleClip – For reducing what is displayed of a surface to a rectangular subregion

Once a transform is created, it can be added to a visual with IDCompositionVisual::SetTransform().

Surfaces

The surfaces required a bit more effort than the visuals to figure out. While it is true that IDXGISurface objects can be used as surfaces, there are some requirements that were not immediately obvious to me from the documentation; that an object implements IDXGISurface does not alone mean it can be used. For displaying images from the file system, I used the WIC (Windows Imaging Component) library to convert a file stream to an in-memory bitmap. The bitmap is then rendered to an IDCompositionSurface. Though IDCompositionSurface objects may support different pixel formats, I suggest using DXGI_FORMAT_B8G8R8A8_UNORM with DXGI_ALPHA_MODE_PREMULTIPLIED. We can then call BeginDraw() on the surface to begin painting on it, BitBlt the image to the surface, and call EndDraw(); the surface is then ready for use with DirectComposition.

HRESULT hr = S_OK;
HBITMAP hBitmap = NULL, hBitmapOld = NULL;
UINT width = 0;
UINT height = 0;
POINT p{ 0, 0 };
ComPtr<IDCompositionSurface> surface;
ComPtr<IDXGISurface1> drawToTexture;

CreateHBitmapFromFile(L"Assets\\background.png", &hBitmap, width, height);
BOF(hr = dcompDevice->CreateSurface(width, height, DXGI_FORMAT_B8G8R8A8_UNORM, DXGI_ALPHA_MODE_PREMULTIPLIED, &surface));
// BeginDraw returns (in p) the offset within the surface at which to draw.
BOF(hr = surface->BeginDraw(nullptr, __uuidof(IDXGISurface1), (void**)&drawToTexture, &p));
HDC hSurfaceDC, hBitmapDC;
BOF(hr = drawToTexture->GetDC(FALSE, &hSurfaceDC));
hBitmapDC = CreateCompatibleDC(hSurfaceDC);
hBitmapOld = (HBITMAP)SelectObject(hBitmapDC, hBitmap);
BitBlt(hSurfaceDC, p.x, p.y,
	width, height, hBitmapDC, 0, 0, SRCCOPY);
if (hBitmapOld)
{
	SelectObject(hBitmapDC, hBitmapOld);
}
DeleteDC(hBitmapDC);
drawToTexture->ReleaseDC(nullptr);
BOF(hr = surface->EndDraw());

SwapChains

Though ID3D11Texture2D objects implement the IDXGISurface interface, I could not directly bind them to DirectComposition. However, DirectComposition does accept swap chain objects, and I had success binding to those. For a swap chain to work as an object consumed by DirectComposition, there are some options that must be set on it. We create a swap chain by initializing a DXGI_SWAP_CHAIN_DESC1 structure specifying the options and dimensions. That structure is passed as an argument to IDXGIFactory2::CreateSwapChainForComposition(). If you have used Direct2D/Direct3D, you may be familiar with CreateSwapChainForHwnd(). While CreateSwapChainForHwnd() creates a swap chain that is bound to a displayable window, swap chains created by CreateSwapChainForComposition() are bound to an object that is not automatically visible on the screen.

For the DXGI_SWAP_CHAIN_DESC1 options, we must set the .BufferUsage member to DXGI_USAGE_RENDER_TARGET_OUTPUT. The pixel format, specified through the .Format member, should be set to a format compatible with our composition. I would once again suggest DXGI_FORMAT_B8G8R8A8_UNORM.

HRESULT hr;
this->device = device;
this->context = context;
ComPtr<IDXGIDevice1>    dxgiDevice;
ComPtr<IDXGIAdapter>    dxgiAdapter;
ComPtr<IDXGIFactory2>   dxgiFactory;
ComPtr<IDXGISwapChain1> swapChain;
hr = device.As(&dxgiDevice);
hr = dxgiDevice->GetAdapter(&dxgiAdapter);
hr = dxgiAdapter->GetParent(IID_PPV_ARGS(&dxgiFactory));
DXGI_SWAP_CHAIN_DESC1 swapChainDesc = {};
swapChainDesc.Width = 2160;
swapChainDesc.Height = 3840;
swapChainDesc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
swapChainDesc.Stereo = FALSE;
swapChainDesc.SampleDesc.Count = 1;
swapChainDesc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
swapChainDesc.BufferCount = 2;
swapChainDesc.Scaling = DXGI_SCALING_STRETCH;
swapChainDesc.SwapEffect = DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL;
swapChainDesc.AlphaMode = DXGI_ALPHA_MODE_PREMULTIPLIED;

hr = dxgiFactory->CreateSwapChainForComposition(
   device.Get(),
   &swapChainDesc,
   nullptr,
   &swapChain
);

Since this is for a test, I don’t need to render anything complex. Rendering a solid color to the buffer is sufficient.

ComPtr<ID3D11Texture2D> backBuffer;
ComPtr<ID3D11RenderTargetView> renderTargetView;
hr = swapChain->GetBuffer(0, IID_PPV_ARGS(&backBuffer));
hr = device->CreateRenderTargetView(backBuffer.Get(), nullptr, &renderTargetView);
ID3D11RenderTargetView* rtv = renderTargetView.Get();
context->OMSetRenderTargets(1, &rtv, nullptr);
D3D11_VIEWPORT vp = {};
vp.Width = 2160.0f;
vp.Height = 3840.0f;
vp.MinDepth = 0.0f;
vp.MaxDepth = 1.0f;
context->RSSetViewports(1, &vp);
float clearColor[4] = { 0.2f, 0.4f, 0.8f, 1.0f };
context->ClearRenderTargetView(renderTargetView.Get(), clearColor);
hr = swapChain->Present(1, 0);

Once something is rendered to the SwapChain, we can bind it directly to a composition.

ComPtr<IDCompositionVisual> d3dVisual;
BOF(hr = dcompDevice->CreateVisual(&d3dVisual));
BOF(hr = d3dVisual->SetContent(displaySwapChain.Get()));

Transformations

Transformations are one of the main points of DirectComposition. Through transformations and the visual hierarchy expressed through the connections between IDCompositionVisuals and IDCompositionSurfaces, our various visual objects are arranged, scaled, rotated, and blended together. Despite their importance, I won’t cover them much here beyond what is necessary to show that I’m combining graphics from two different rendering systems. The renders that I’m passing to DirectComposition are 4K portrait mode. I want DirectComposition to place the visuals side-by-side, but I also want them scaled down to 25% so that they fit on my laptop screen. To scale down to 25%, I create a scale transform, assign the X and Y axis scale factors, and then attach the transform to the visual.

ComPtr<IDCompositionScaleTransform> scaleTransform;
BOF(hr = dcompDevice->CreateScaleTransform(&scaleTransform));
scaleTransform->SetScaleX(0.25);
scaleTransform->SetScaleY(0.25);
dcompRootVisual->SetTransform(scaleTransform.Get());

Since I am applying this scale to the root visual, it will affect all child visuals; both of my visuals will be scaled. To position them side-by-side, I set the X offset of one visual to 0px and of the other to 2200px.

BOF(hr = d3dVisual->SetContent(displaySwapChain.Get()));
d3dVisual->SetOffsetX(2200);
d3dVisual->SetOffsetY(100);
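One detail worth calling out: because the scale sits on the root visual, it applies to the child offsets as well as to the content. A quick calculation (plain C++, just to illustrate the arithmetic) shows why an offset of 2200px clears the first visual:

```cpp
#include <cassert>

// The root visual's 0.25 scale applies to child offsets and content alike.
// A 2160px-wide visual scales down to 540px on screen, and a child offset of
// 2200px lands at 550px -- just past the first visual, leaving a small gap.
struct Placement { float x; float width; };

Placement onScreen(float offsetX, float contentWidth, float rootScale) {
    return { offsetX * rootScale, contentWidth * rootScale };
}
```

So the second visual starts at 550 device pixels while the first ends at 540, giving the two scaled-down renders a 10px gutter between them.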

The transforms provided by DirectComposition were listed earlier in this post.

For the prototype I was working on, this was enough information to mix the 2D and 3D rendering technologies into a representation of an application with fluid animation. Whether these rendering technologies are used in the final version remains to be seen; I might instead use WinUI (which gives higher-level access to some of these features) or Three.js with HTML. In any case, I enjoyed taking a look at this approach.

