Where is D3DX12.h?

Every so often, I have to create a new DirectX 12 program. The DirectX headers and libraries are part of the Windows headers and are generally present with the installation of the desktop development components of Visual Studio. However, some common DirectX 12 related libraries are not part of that set, and D3DX12.h is one such header. I sometimes forget that it isn't part of the Windows headers, so I'm making this post for myself (should I not immediately recall where to find it). This header and other such helpers can be found on GitHub in Microsoft's DirectX-Headers repository at https://github.com/microsoft/DirectX-Headers.

Once you clone the repository, you'll need to update your project to look in that folder. In Visual Studio, right-click on your project and select "Properties." Under "C/C++" select "General," then "Additional Include Directories." For all project configurations, you will want to add the path to the DirectX-Headers\include\directx folder.


Posts may contain products with affiliate links. When you make purchases using these links, we receive a small commission at no extra cost to you. Thank you for your support.

Mastodon: @j2inet@masto.ai
Instagram: @j2inet
Facebook: @j2inet
YouTube: @j2inet
Telegram: j2inet
Bluesky: @j2i.net

uConsole: The Best Pi Mini Console You Won’t Have Any Time Soon

I often carry a Pi with me when I go out to deployments. They have the tools that I need for certain diagnostic purposes and are easy to travel with. Usually I just need to bring a keyboard and a network cable along and it is ready to be used. The latest Pi-based system that I got my hands on is the uConsole from Clockwork Pi. I’ve used another device from Clockwork Pi known as the PicoCalc, a Raspberry Pi Pico based calculator. I saw the uConsole as being a Pi-based version of the same thing and ordered one. Clockwork Pi takes orders, then manufactures and ships the items in batches; this isn’t a product that ships as soon as you order it. Prepare to wait for a decent amount of time. The two PicoCalcs that I ordered both took 4 months from order date to delivery. I ordered a uConsole in July of 2025, and as of February 2026, it hasn’t shipped yet. That’s why I say you won’t be getting one any time soon.

There are people and companies that already have their hands on one, and they will sell theirs at anywhere from a modest to an insane markup. The expediency comes at a cost. I managed to purchase one and get it shipped to the USA from a third party. Delivery of my original order is still pending.

The kit requires assembly. It only takes a few minutes to put it together. The kit is also very adaptable with products from third parties, allowing you to add USB ports, a cellular modem, software defined radio, LoRa, or an M.2 drive. With the base-level product you have a Pi running in an all-metal case. You will need to bring your own pair of 18650 batteries for it. It also comes with a 4 gig SD card with the operating system on it. I encourage tossing that and using a card that has at least 64 gigs on it. The keyboard for the device reminds me of the HP48G or Blackberry keyboard (both of these are old devices, but there are not many generally well-known devices with keyboards to compare it to). The keyboard is backlit.

The side of the uConsole showing the available ports

On the side are a USB-A port, a USB-C port (for charging), a micro-HDMI port, and a 3.5mm audio jack. Opposite these ports is an insert that covers an expansion port area. If you purchase one of the expansion circuit boards, your additional ports (more USB, Ethernet, and so on) will be exposed on this side of the device. The back of the device has a kickstand that lets it sit at an angle when you set it down on a flat surface.

As mentioned, you will not want to use the 4 gig microSD card that comes with it. When you write the operating system image to a larger card, instead of writing the standard Pi image, you will want to write a build that already has the necessary drivers installed. OS images and drivers can be found on the Clockwork Pi GitHub page at https://github.com/clockworkpi/uConsole or in a Google Drive folder provided by them.

Praises

The thing I like most about the uConsole is that it is a small, portable device. It is too big to fit in your pocket, but small enough to fit in a carry-on.

The Keyboard

I appreciate that it has a keyboard and a screen, making it a stand-alone unit that needs no other accessories for basic operations. The backlit keyboard was a bit of an extra surprise, though the backlighting has flaws.

Expandable

Through third-party support, the device is customizable, making it more flexible.

Criticisms

The Trackball (Update)

Update: I am leaving my initial complaints in place since they describe the out-of-box experience. When I was preparing a new SD card, I saw there was a new build of the OS available. With this new build, the mouse movement is a lot better and mostly nullifies my complaint.

Original: The top criticism I have for this device is the trackball. Despite all my praise for the device, my initial experience with the trackball was garbage. From looking at messages in the Clockwork Pi forum, it is hit-or-miss whether you get a unit with a good trackball; there are a few threads there about trackball issues. Sometimes you have to roll it for what feels like forever to move the mouse cursor from one edge of the screen to the other. People who have had the issue have been able to correct it by installing a third-party trackball (I’ve got one on order).

Ordering Wait

After you order, the wait for the device to arrive feels like it takes forever! I understand this is the nature of devices that are manufactured after they are ordered, but I have to call it out since it might be unacceptable to others, especially if you live in an apartment. Someone on a one-year lease could find themselves living at a new address before their order is fulfilled.

Keyboard Backlight

I like that it has a keyboard backlight, but the backlighting is not at all even.

The WiFi antenna that comes with the device isn’t that great, but I have plenty of antennas for Pis, and the uConsole has an insert that provides a place to attach an external antenna.

Do I Suggest Others Get One?

Generally no, but that is because of the wait time for delivery. If that gets resolved, my stance will change. While I like the keyboard, I’ve seen that some others criticize it. Since I grew up using devices that had similar keyboards, I may be better adapted to it. I would like to say that someone should try the keyboard themselves to see how they like the feel before making the purchase, but that may be hard to do unless they know a person that already has one.

Next Up: Expansion

I’m waiting for some upgrades for my uConsole to arrive. When they do, I’ll be writing about the process here.



Please Provide an Offline Mode | Opinion

With the apparent ubiquity of internet connections, more applications not only support online functionality, but require it. When performing an update of an application a month ago, I decided to keep an instance of the previous application in place. The new application, though it retained the old functionality in some sense, moved that functionality to a web service. As a frequent traveler, I’m no stranger to finding myself without an internet connection. Whether it is on a plane with no or failed internet, in some building that acts as a Faraday cage, or on a connection that blocks access to services that are not white-listed, I am sometimes in situations where there is no open data pathway between my computer and a necessary service. Though these scenarios might sound specific to me, there’s another scenario that may be more generally familiar: many people have experienced impacts from Azure or AWS services going offline.


I understand there are some services for which there is no sensible offline mode, such as an application used to order food or a ride, which requires establishing communication between parties, or a service that can’t be implemented locally due to licensing or computational restrictions. But there are some really bad examples of applications that don’t work offline. Just about any game published by Netflix is a great example: connect to a network that blocks access to Netflix, or lose your connection, and you cannot play solitaire. As I write this, I just got done making a stub implementation of a service just so that I could run a developer tool offline.
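Stubbing a service like that is straightforward when the dependency sits behind an interface. Below is a minimal sketch of the idea; the `ILicenseService` name and its method are hypothetical, not taken from any real tool. The stub returns canned answers so the tool can run with no network at all.

```cpp
#include <cassert>
#include <string>

// The tool's network dependency, expressed as an interface so the
// real network-backed client can be swapped out. (Hypothetical name.)
struct ILicenseService {
    virtual ~ILicenseService() = default;
    virtual bool IsFeatureEnabled(const std::string& feature) = 0;
};

// Offline stub: canned answers, no network required.
struct StubLicenseService : ILicenseService {
    bool IsFeatureEnabled(const std::string&) override { return true; }
};
```

The rest of the tool only ever sees `ILicenseService`, so swapping the stub in for offline work requires no other changes.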


One of the earliest encounters I remember with failed connection assumptions was when Microsoft announced the Xbox One before release as an always-online console that had no need for an optical drive. That announcement wasn’t welcomed, and different groups objected for varied reasons. Some were soldiers, who knew this translated into simply not being able to use the Xbox One in locations where their connections were limited for security reasons. Microsoft also figured that people didn’t need large storage; if the drive got full, the console could just erase a game and download it again when the user wanted to play. Of course, this assumed that the user could establish a connection of sufficient bandwidth. It also overlooked that not everyone has an unlimited internet connection. I’m in a market in which the two Internet providers (AT&T and Xfinity/Comcast) have monthly data quotas, and going over them results in overage fees. There was a game whose download I had to pause for a few days because that one game alone would use more than half of the data transfer I was allowed in a month. While outcries against the always-online plan were heard, I think they primarily delayed the transition rather than prevented it.
When online applications were becoming more common, there was consideration given to offline modes. Microsoft even had a Patterns & Practices publication on smart clients that described design patterns for making applications resilient to unstable and failed connections. Though this publication contains older wisdom, I don’t think it is completely outdated. In it, connections are treated as less than 100% reliable. Applications were treated as “occasionally connected,” working from cached data and queuing requests when a service isn’t available, sending information to a service and getting responses when they become available. Microsoft Outlook was one of the examples of an application made with this philosophy.
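The “occasionally connected” queuing idea can be sketched in a few lines of portable C++. This is a minimal illustration of the pattern, not code from the publication; the `Connection` type and its members are hypothetical stand-ins for a real network client.

```cpp
#include <cassert>
#include <deque>
#include <string>
#include <vector>

// Hypothetical connection; a real client would wrap HTTP calls.
struct Connection {
    bool online = false;
    std::vector<std::string> sent;  // records what actually went out
    bool Send(const std::string& msg) {
        if (!online) return false;
        sent.push_back(msg);
        return true;
    }
};

// Occasionally-connected sender: queue on failure, flush when possible.
class QueuedSender {
public:
    explicit QueuedSender(Connection& c) : conn(c) {}
    // Try to send immediately; queue the request if the connection is down.
    void Submit(const std::string& msg) {
        Flush();  // opportunistically drain older items first, preserving order
        if (!conn.Send(msg)) pending.push_back(msg);
    }
    // Drain queued requests in order while the connection cooperates.
    void Flush() {
        while (!pending.empty() && conn.Send(pending.front()))
            pending.pop_front();
    }
    std::size_t PendingCount() const { return pending.size(); }
private:
    Connection& conn;
    std::deque<std::string> pending;
};
```

Nothing is lost while offline; requests simply accumulate and go out, still in order, once a connection exists.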


In making applications for clients, connection failure modes are tested. The most basic test is simply turning off the Internet connection and seeing whether the application is able to maintain some subset of its functionality. For example, a digital signage application that shows performance schedules has those schedules stored locally, updating its local copy from a CMS on some interval. If the connection to the CMS fails, it still has a local copy of the schedule to present. When a connection fails, reservation functionality might be replaced with a screen directing someone to the location of a ticketing booth or a phone number to call for more information. The application doesn’t become non-functional altogether; functionality degrades with the loss of a connection, but it doesn’t flatline.
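That degrade-don’t-flatline behavior can be sketched compactly. The `ScheduleSource` class and its fallback message below are hypothetical, and a real signage application would persist its cache to disk, but the shape of the logic is the same: refresh the local copy when a fetch succeeds, fall back to the cache when it fails, and fall back to a static notice when there is no cache yet.

```cpp
#include <cassert>
#include <optional>
#include <string>

// Hypothetical CMS fetch result: empty when the connection failed.
using FetchResult = std::optional<std::string>;

// Keeps the last good schedule locally and degrades gracefully.
class ScheduleSource {
public:
    std::string Current(const FetchResult& fetched) {
        if (fetched) cached = *fetched;  // refresh the local copy
        if (cached) return *cached;      // degrade to cached data
        // Last resort: never go blank, point the viewer somewhere useful.
        return "See the ticketing booth for today's schedule.";
    }
private:
    std::optional<std::string> cached;
};
```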


Whether it is a game, a utility, or something else, I wish that some bit of offline functionality would be retained. I fear this will not happen. With RAM, solid-state storage, and GPUs becoming scarcer as data centers and AI workloads consume them, affordable hardware is harder to come by, and laptops are taking on more characteristics one would expect from a netbook. Hosting functionality in the cloud and streaming it to the user has become one of the answers to the lack of access to hardware at affordable prices.


I don’t imagine that this will change, but I wish some consideration were given to keeping an application at least partially functional when a connection cannot be established.



Use Convolutions, Not Randomness | Opinion

Random functions can be useful; there is even specialized hardware in many processors for generating random numbers. That said, there are situations where the requirements as written from a user perspective call for randomness, but I’ve learned to have an inclination to avoid it. Thinking about projects I’ve worked on over the past decade, there are instances where the requirements called for a program to make a random selection from a pool of possible selections and present it to the user. There have been times when I’ve done exactly that, only to have a bug written up and sent back to me. It is by way of these bug reports that I’ve learned previously unstated requirements.

One bug report was about having received the same selection twice in a row. “Well, yes. It’s random; sometimes that can happen.” A new request was made for this to never happen. I added logic to prevent it, knowing that this made the selection less random; if the previous selection tells you anything about the next one, then it is no longer completely random. In another instance, I received a bug report stating that some item wasn’t showing up frequently enough or another showed up too frequently. Now I needed to keep track of how often each selection had been presented and adjust the selection process to give a higher chance to items that had been shown less frequently.
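Those two unstated requirements (no immediate repeats, favor rarely shown items) can be met with a small deterministic helper. This is a sketch of the idea rather than the project’s actual code; the function and variable names are mine.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Pick the least-shown item, skipping the previous pick so the same
// item never appears twice in a row. shownCount[i] tracks how often
// item i has been presented; ties go to the lowest index.
std::size_t NextSelection(std::vector<std::size_t>& shownCount,
                          std::size_t previous) {
    std::size_t best = shownCount.size();  // sentinel: nothing chosen yet
    for (std::size_t i = 0; i < shownCount.size(); ++i) {
        if (i == previous) continue;       // unstated requirement #1: no repeats
        if (best == shownCount.size() || shownCount[i] < shownCount[best])
            best = i;                      // unstated requirement #2: favor rare items
    }
    ++shownCount[best];                    // record that we showed it
    return best;
}
```

Note that this sacrifices true randomness by design: the previous selection and the frequency history both influence the next pick, which is exactly what the bug reports asked for.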

The most egregious problem I encountered was when QA reported that the software was really misbehaving. For me, the problem was that the misbehaviour never occurred. I tried letting the program run through a hundred cycles of selecting items, and it all worked fine. On the machine that QA was using, the problem was infrequent, but it did occur. Because of the inconsistency, I decided to rip out the uses of the random function and replace them with a routine that shuffled the selections around before making a selection. The routine was deterministic, not random, but it convoluted the selection process enough to make it perceivably random and not generally predictable to a casual observer. Most importantly of all, the QA machine and my development machine were now making the same selections in the same order. Through this, we found that the bug was only observable when one specific selection followed another specific selection, something that had only a 2% chance of happening.
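A deterministic “convoluted” ordering like the one described can be built from a seeded Fisher-Yates shuffle. This is a sketch of the technique, not the project’s actual routine, and the linear congruential constants are an arbitrary (if common) choice.

```cpp
#include <cassert>
#include <cstdint>
#include <numeric>
#include <utility>
#include <vector>

// Deterministic Fisher-Yates shuffle driven by a fixed linear
// congruential sequence. Not random at all, but convoluted enough to
// look random to a casual observer, and identical on every machine.
std::vector<int> ConvolutedOrder(std::size_t count, std::uint64_t seed) {
    std::vector<int> items(count);
    std::iota(items.begin(), items.end(), 0);  // 0, 1, ..., count-1
    std::uint64_t state = seed;
    for (std::size_t i = count; i > 1; --i) {
        // Advance the deterministic sequence, then swap like Fisher-Yates.
        state = state * 6364136223846793005ULL + 1442695040888963407ULL;
        std::swap(items[i - 1], items[state % i]);
    }
    return items;
}
```

Because the order depends only on the seed and the count, a development machine and a QA machine step through identical sequences, which is what made the 2% ordering bug reproducible.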

Now, when I’m asked to make a random selection, I consider whether it really needs to be random or only perceivably random. Some routines, such as the creation of identifiers like GUIDs, do need to be random. But much of the time it doesn’t need to be.



Mixing Rendering Technologies on Windows | DirectX

Windows has many methods of rendering images to the screen. These include GDI, Direct3D, Direct2D, DirectWrite, XAML, Windows Forms, Win2D, and WinUI. Windows 10 brought support for mixing some of these rendering technologies through XAML Islands. There are other ways of mixing them as well. I was interested in mixing Direct3D with GDI and Direct2D. The rendering technologies were going to generate what can be viewed as different layers of my UI: a background visual, a 3D image on top of that, and data overlays and command UI elements above those. How might one mix these together?

Because the UI elements were going to be organized in layers, they were a great candidate for DirectComposition. DirectComposition takes different UI elements and handles combining them. It works with graphical objects that implement the IDXGISurface interface, and it can also show swap chains, which are used in Direct3D and Direct2D to efficiently render a scene and then swap the currently displayed scene with the new one. For other objects, one can render to IDCompositionSurface objects and use those.

Base Classes

I don’t cover in this post how I created a Win32 application window. If you want details on that, see a post I made a few years ago about a C++ application base class. Though this is not the same version of the component that I described there, the two are still similar enough for that post to have value as an explanation.

Patterns Used in the Example Code

In the example code, there are some patterns that are used throughout that I explain here.

do {} while(false);

There are several blocks of code wrapped in a do { } while(false); control structure. While this structure is usually used for loops, I’m using it for something else. When a break; statement is executed, the flow of execution exits the current block. A majority of the calls that I make for DirectComposition and some other APIs return HRESULT values that indicate success or failure. If a failure occurs, I need the code to exit. This could be done with a goto statement, by throwing an exception, or by executing a break; statement within a do { } while(false) block.

Every time an HRESULT value is returned, I must check the return value for success. If it is a failure code, then I must execute a break;. To simplify this comparison, I’ve defined a macro and wrap each function call with it.

#define BOF(value)  if (FAILED(value)) break;

BOF(hr = dcompDevice->CreateVisual(&dcompRootVisual));
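To show the flow outside of Windows, here is a self-contained sketch that mocks HRESULT, S_OK, E_FAIL, and FAILED (the real definitions come from the Windows headers). A failing step breaks out of the block, skipping everything after it.

```cpp
#include <cassert>

// Mock stand-ins for the Windows types so this sketch is portable.
using HRESULT = long;
constexpr HRESULT S_OK = 0;
constexpr HRESULT E_FAIL = -1;
constexpr bool FAILED(HRESULT hr) { return hr < 0; }

// "Break On Fail": exit the enclosing block when a call fails.
#define BOF(value) if (FAILED(value)) break;

// Runs three steps; failAt (1-based) forces that step to fail, 0 means
// no failure. Returns how many steps completed before the early exit.
int RunSteps(int failAt) {
    int completed = 0;
    HRESULT hr = S_OK;
    do {
        BOF(hr = (failAt == 1) ? E_FAIL : S_OK);
        ++completed;
        BOF(hr = (failAt == 2) ? E_FAIL : S_OK);
        ++completed;
        BOF(hr = (failAt == 3) ? E_FAIL : S_OK);
        ++completed;
    } while (false);  // never loops: the structure exists only for break
    return completed;
}
```

The advantage over goto is that the exit point is always the end of the block, and over exceptions that no unwinding machinery is involved.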

I also use the Windows Runtime C++ Template Library (WRL). Specifically, I use Microsoft::WRL::ComPtr. The ComPtr class is similar to std::shared_ptr: it is a smart pointer that uses reference counting to keep an object alive until there are no references to it, but it is specifically for COM objects. The details of COM are not important here beyond a couple of things: COM objects implement one or more interfaces, and when there are no references to a COM object it should be deleted and its resources reclaimed. When all ComPtr instances referencing an object go out of scope, the reference count is decremented and the cleanup operation is triggered automatically. Many of the DirectComposition and Direct3D objects implement COM interfaces.

Creating a DirectComposition application

Since DirectComposition is a native API, an easy way to create such an application is to create a Win32 application that hosts DirectComposition objects. The Win32 graphical APIs will still work in this application; the developer can select whether the plain Win32 UI objects appear on top of or behind the DirectComposition surface. The DirectComposition API manages resources on the graphics card, and some of the objects used are Direct3D objects. To initialize DirectComposition, we start by initializing Direct3D. We make a call to D3D11CreateDevice() to create a Direct3D device. We then request the IDXGIDevice interface from the D3D device and pass it to a call to DCompositionCreateDevice() to create an IDCompositionDevice object.

ComPtr<ID3D11Device>         deviceTemp;
ComPtr<ID3D11Device4>        d3dDevice;
ComPtr<ID3D11DeviceContext>  d3dContext;
ComPtr<IDXGIDevice4>         dxgiDevice;
ComPtr<IDCompositionDevice>  dcompDevice;
D3D_FEATURE_LEVEL            featureSupportLevel = {};
HRESULT hr = S_OK;

DWORD createDeviceFlags = D3D11_CREATE_DEVICE_BGRA_SUPPORT;
#if defined(_DEBUG)
createDeviceFlags |= D3D11_CREATE_DEVICE_DEBUG;
#endif
BOF(hr = D3D11CreateDevice(
	nullptr,                  // default adapter
	D3D_DRIVER_TYPE_HARDWARE,
	nullptr,
	createDeviceFlags,
	nullptr,                  // default feature levels
	0,
	D3D11_SDK_VERSION,
	&deviceTemp,
	&featureSupportLevel,
	&d3dContext
));
BOF(hr = deviceTemp.As(&d3dDevice));  // upgrade to the ID3D11Device4 interface
BOF(hr = d3dDevice.As(&dxgiDevice));  // query the DXGI device interface
BOF(hr = DCompositionCreateDevice(
	dxgiDevice.Get(),
	IID_PPV_ARGS(&dcompDevice)
));

We use the IDCompositionDevice object to create a target. The IDCompositionDevice::CreateTargetForHwnd function takes as its first parameter a handle to the window that will show the objects. The second parameter is a BOOL that specifies whether the DirectComposition content should show up on top of (TRUE) or behind (FALSE) the other graphical objects in the application window. The third parameter is an out parameter that gives us a pointer to an IDCompositionTarget object. Most of the remaining DirectComposition calls occur through the device object or through objects that it creates.

BOF(hr = dcompDevice->CreateTargetForHwnd(
	_hWnd,
	TRUE,
	&dcompTarget
));
BOF(hr = dcompDevice->CreateVisual(&dcompRootVisual));

Visuals and Surfaces

Elements managed by DirectComposition are arranged hierarchically. An IDCompositionVisual object will be at the root of this hierarchy (hereafter referred to simply as a “visual”). Visuals can contain both an IDCompositionSurface (hereafter “surface” or “content”) to display and several other visuals. Each visual can also have transformations applied to it, such as scaling, rotating, skewing, or offsetting. These transformations affect both the object to which they are directly applied and its child objects. Scaling a root visual to 25% of its original size will also reduce its child objects to 25%. Each child visual may also have its own transformations, which are applied in addition.

Visuals

Visuals are created through IDCompositionDevice::CreateVisual. The function accepts only one parameter: a pointer that will receive the pointer to the new visual. Child visuals are added to a visual through IDCompositionVisual::AddVisual(). If we want to apply a transformation to a visual, we can create a transformation object through functions on IDCompositionDevice:

  • IDCompositionDevice::CreateScaleTransform – For making a surface larger or smaller
  • IDCompositionDevice::CreateTranslateTransform – For moving a surface along the X or Y axis
  • IDCompositionDevice::CreateRotateTransform – For rotating a surface
  • IDCompositionDevice::CreateSkewTransform – For skewing a surface
  • IDCompositionDevice::CreateMatrixTransform – For creating a custom transform from your own calculations
  • IDCompositionDevice::CreateRectangleClip – For clipping what is displayed of a surface to a rectangular subregion

Once a transform is created, it can be added to a visual with IDCompositionVisual::SetTransform().

Surfaces

The surfaces required a bit more effort than the visuals to figure out. While it is true that IDXGISurface objects can be used as surfaces, there are some requirements that were not immediately obvious to me from the documentation. That an object implements IDXGISurface does not by itself mean the object can be used. For displaying images from the file system, I used the WIC (Windows Imaging Component) library to convert a file stream to an in-memory bitmap. The bitmap is then rendered to an IDCompositionSurface. Though IDCompositionSurface objects may support different pixel formats, I suggest using DXGI_FORMAT_B8G8R8A8_UNORM with DXGI_ALPHA_MODE_PREMULTIPLIED. We can then call BeginDraw() on the surface to begin painting on it. We BitBlt the image to the surface, call EndDraw() on it, and it is then ready to be used as content for DirectComposition.

HBITMAP hBitmap = NULL, hBitmapOld = NULL;
UINT width = 0;
UINT height = 0;
POINT pointOffset = { 0, 0 };
POINT p{ 0, 0 };
ComPtr<IDCompositionSurface> surface;
ComPtr<IDXGISurface1>        drawToTexture;

// CreateHBitmapFromFile is a helper (defined elsewhere) that loads the
// image through WIC and returns an HBITMAP plus its dimensions.
CreateHBitmapFromFile(L"Assets\\background.png", &hBitmap, width, height);
BOF(hr = dcompDevice->CreateSurface(width, height, DXGI_FORMAT_B8G8R8A8_UNORM, DXGI_ALPHA_MODE_PREMULTIPLIED, &surface));
BOF(hr = surface->BeginDraw(nullptr, IID_PPV_ARGS(&drawToTexture), &p));
HDC hSurfaceDC = NULL, hBitmapDC = NULL;
BOF(hr = drawToTexture->GetDC(FALSE, &hSurfaceDC));
hBitmapDC = CreateCompatibleDC(hSurfaceDC);
hBitmapOld = (HBITMAP)SelectObject(hBitmapDC, hBitmap);
BitBlt(hSurfaceDC, pointOffset.x, pointOffset.y,
	width, height, hBitmapDC, 0, 0, SRCCOPY);
if (hBitmapOld)
{
	SelectObject(hBitmapDC, hBitmapOld);
}
DeleteDC(hBitmapDC);
drawToTexture->ReleaseDC(nullptr);
DeleteObject(hBitmap);
BOF(hr = surface->EndDraw());

SwapChains

Though ID3D11Texture2D objects implement the IDXGISurface interface, I could not directly bind them to DirectComposition. However, DirectComposition does accept swap chain objects, and I had success binding to those. For a swap chain to work as an object consumed by DirectComposition, there are some options that must be set on it. We create a swap chain by initializing a DXGI_SWAP_CHAIN_DESC1 structure specifying the options and dimensions. That structure is passed as an argument to IDXGIFactory2::CreateSwapChainForComposition(). If you have used Direct2D/Direct3D, you may be familiar with CreateSwapChainForHwnd(). While CreateSwapChainForHwnd() creates a swap chain that is bound to a displayable window, swap chains created by CreateSwapChainForComposition() are bound to an object that is not automatically visible on the screen.

For the DXGI_SWAP_CHAIN_DESC1 options, we must set the .BufferUsage member to DXGI_USAGE_RENDER_TARGET_OUTPUT. The pixel format, specified through the .Format member, should be set to a format compatible with our composition. I would once again suggest DXGI_FORMAT_B8G8R8A8_UNORM.

HRESULT hr;
this->device = device;
this->context = context;
ComPtr<IDXGIDevice1>  dxgiDevice;
ComPtr<IDXGIAdapter>  dxgiAdapter;
ComPtr<IDXGIFactory2> dxgiFactory;
hr = device.As(&dxgiDevice);
hr = dxgiDevice->GetAdapter(&dxgiAdapter);
hr = dxgiAdapter->GetParent(IID_PPV_ARGS(&dxgiFactory));

DXGI_SWAP_CHAIN_DESC1 swapChainDesc = {};
swapChainDesc.Width = 2160;
swapChainDesc.Height = 3840;
swapChainDesc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
swapChainDesc.Stereo = FALSE;
swapChainDesc.SampleDesc.Count = 1;
swapChainDesc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
swapChainDesc.BufferCount = 2;
swapChainDesc.Scaling = DXGI_SCALING_STRETCH;
swapChainDesc.SwapEffect = DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL;
swapChainDesc.AlphaMode = DXGI_ALPHA_MODE_PREMULTIPLIED;

// swapChain is a ComPtr<IDXGISwapChain1> member
hr = dxgiFactory->CreateSwapChainForComposition(
   device.Get(),
   &swapChainDesc,
   nullptr,   // no output restriction
   &swapChain
);

Since this is for a test, I don’t need to render anything complex. Rendering a solid color to the buffer is sufficient.

ComPtr<ID3D11Texture2D>        backBuffer;
ComPtr<ID3D11RenderTargetView> renderTargetView;
hr = swapChain->GetBuffer(0, IID_PPV_ARGS(&backBuffer));
hr = device->CreateRenderTargetView(backBuffer.Get(), nullptr, &renderTargetView);
ID3D11RenderTargetView* rtv = renderTargetView.Get();
context->OMSetRenderTargets(1, &rtv, nullptr);
D3D11_VIEWPORT vp = {};
vp.Width = 2160.0f;
vp.Height = 3840.0f;
vp.MinDepth = 0.0f;
vp.MaxDepth = 1.0f;
context->RSSetViewports(1, &vp);
float clearColor[4] = { 0.2f, 0.4f, 0.8f, 1.0f };
context->ClearRenderTargetView(renderTargetView.Get(), clearColor);
hr = swapChain->Present(1, 0);

Once something is rendered to the SwapChain, we can bind it directly to a composition.

ComPtr<IDCompositionVisual> d3dVisual;
BOF(hr = dcompDevice->CreateVisual(&d3dVisual));
BOF(hr = d3dVisual->SetContent(displaySwapChain.Get()));

Transformations

Transformations are one of the main points of DirectComposition. Through transformations and the visual hierarchy expressed through the connections between IDCompositionVisuals and IDCompositionSurfaces, our various visual objects are arranged, scaled, rotated, and blended together. Despite their importance, I won’t cover them much here beyond what is necessary to show that I’m combining graphics from two different rendering systems. The renders that I’m passing to DirectComposition are 4K portrait mode. I want DirectComposition to place the visuals side-by-side, but I also want them scaled down to 25% so that they fit on my laptop screen. To scale down to 25%, I create a scale transform, assign the X and Y axis scale factors, and then attach the transform to the visual.

ComPtr<IDCompositionScaleTransform> scaleTransform;
BOF(hr = dcompDevice->CreateScaleTransform(&scaleTransform));
scaleTransform->SetScaleX(0.25);
scaleTransform->SetScaleY(0.25);
dcompRootVisual->SetTransform(scaleTransform.Get());

Since I am applying this scale to the root visual, it will affect all child visuals. Both of my visuals will be scaled. To position them side-by-side, I set the X position of one image to 0px and the next to 2200px.

BOF(hr = d3dVisual->SetContent(displaySwapChain.Get()));
d3dVisual->SetOffsetX(2200);
d3dVisual->SetOffsetY(100);
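The arithmetic behind that placement is easy to check. Assuming the parent’s transform applies to the whole subtree (so a child’s offset is scaled along with its content), a child offset of 2200 under a 25% root scale lands at x = 550 on screen. This small sketch models only that math; it does not use the DirectComposition API.

```cpp
#include <cassert>

// A 2D point and a minimal scale-plus-offset chain, mirroring how a
// parent visual's scale applies to a child's offset in a composition tree.
struct Point { double x, y; };

// Child placement: apply the child's own offset first, then the
// parent's scale (the parent transform wraps the whole subtree).
Point Compose(Point local, Point childOffset, double parentScale) {
    Point p{ local.x + childOffset.x, local.y + childOffset.y };
    return { p.x * parentScale, p.y * parentScale };
}
```

So the second 2160-wide visual occupies roughly x = 550 to x = 1090 on screen, sitting beside the first one, which starts at x = 0.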


For the prototype that I was working on, this was enough information to mix the 2D and 3D rendering technologies into a representation of an application with fluid animation. Whether these rendering technologies will be used in the final version remains to be seen. I might use WinUI (which gives higher-level access to some of these features) or Three.js with HTML; the decision remains to be made. In any case, I enjoyed taking a look at this approach.

