Compiling OpenSSL for JavaCard on Windows with Visual Studio

I needed to compile OpenSSL on Windows in preparation for some JavaCard work. While OpenSSL can be compiled with a range of compilers, I specifically wanted to use Visual Studio because that is the compiler generally available on my machines. In addition to Visual Studio, I also needed a build of Perl installed and available in the system path.

After cloning OpenSSL and navigating to the root of the repository, the next step is to configure the build. For the JavaCard tools, a 32-bit version of OpenSSL is needed. I initially ran into problems with part of the build process targeting a 64-bit architecture. To prevent this from happening, some environment variables can be set to ensure the 32-bit versions of the tools are used. Visual Studio provides a batch file for setting these environment variables. Below is the path at which I found this batch file; for you, it may vary depending on the edition of Visual Studio that you have.

C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Auxiliary\Build\vcvars32.bat

Open a command terminal and run this batch file. Then you can start the build process. To configure the build for 32-bit Windows with the options needed for the JavaCard environment, use the following command from the repository root.

perl Configure VC-WIN32 no-asm no-threads enable-weak-ssl-ciphers

If you wanted to make a general build, you could omit everything after VC-WIN32. For a 64-bit build, use VC-WIN64A instead.

Now for the long part. If you were planning on making coffee or having a quick bite to eat, the next command will give you a good opportunity. From the root of the repository, run the following command.

nmake

If you come back and find that the build process has terminated with a complaint about mixing 32-bit and 64-bit code, that means the system is using the 64-bit version of the tools. This will happen if you forgot to run the batch file mentioned earlier. If you would like to run the unit tests for OpenSSL, use the following command.

nmake test

This process also takes a significant amount of time. When it completes, the last step is to install OpenSSL. This command will likely fail unless you open the Visual Studio command prompt with administrative privileges.

nmake install

This command places OpenSSL in C:\Program Files\OpenSSL. The executables themselves are in C:\Program Files\OpenSSL\bin.


Posts may contain products with affiliate links. When you make purchases using these links, we receive a small commission at no extra cost to you. Thank you for your support.

Mastodon: @j2inet@masto.ai
Instagram: @j2inet
Facebook: @j2inet
YouTube: @j2inet
Telegram: j2inet
Twitter: @j2inet

Unresolved external symbol WKPDID_D3DDebugObjectName (LNK2001)

I opened an old Direct3D program and tried recompiling it, only to get the error LNK2001 unresolved external symbol WKPDID_D3DDebugObjectName. This error means the linker could not find a definition for the symbol. I checked the source code and saw that the object of interest was declared in d3dcommon.h. This was confusing at first, but I finally realized that for the symbol to be resolved by the linker, I needed to link dxguid.lib into the project. There are a few ways to link to a library. I prefer to link explicitly in source code instead of in the project settings. In one of my source files, I only needed to include the following.

#pragma comment(lib, "dxguid.lib")

I only need this library linked when compiling in debug mode. Wrapping the directive in a conditional compilation block takes care of linking it only for debug builds.

#if defined(_DEBUG)
#pragma comment(lib, "dxguid.lib")
#endif

With that change, the program compiles and the error has gone away!

For those curious, the D3D program in question is something I have added to the C++ Application Base Class project. One day I intend to make a D3D base class to go along with the D2D base class. The beginnings of my experimentation for it are within that project.



C++ Custom Deleters

Some organizations and entities (including the White House) have advised against using C/C++ in favor of languages with memory-safe features. While I can understand the motivation for such encouragement, realistically, complete abandonment of the language isn’t practical. Managing low-level resources from other languages can be cumbersome, and it doesn’t necessarily insulate someone from resource leaks. There are not always higher-level libraries available for the functionality that one wishes to use; a developer may have to build a library themselves and embrace management of those low-level resources. That said, when writing code in C++, one can use safer approaches to doing so. One such approach is to use std::shared_ptr<T> instead of using pointers directly.

Shared pointers implement reference counting and will delete the underlying memory once the reference count reaches zero, a feature common in other high-level languages. Instead of using the new and delete operators to allocate and release memory, one can use std::make_shared. For other blocks of data for which you might have manually allocated memory, you can use other standard template library classes, such as a std::vector instead of a raw array.

Sometimes the resource in question was allocated by the operating system, and it is up to the developer to manage its release or deletion. These resources can still be managed with std::shared_ptr<T> objects. Let’s take a look at a simple program that reads a file into a buffer.

#include <iostream>
#include <Windows.h>


const DWORD64 MAX_FILE_SIZE = 64 * 1024;//64 kilobytes


int main(int argc, char** argv)
{
	if (argc < 2)
	{
		std::wcout << L"Usage: ShowFileContents <filename>" << std::endl;
		return 1;
	}
	std::string filename = argv[1];
	std::wstring wfilename = std::wstring(filename.begin(), filename.end());
	HANDLE hFile = CreateFile(wfilename.c_str(), GENERIC_READ, FILE_SHARE_READ, NULL, OPEN_EXISTING, NULL, NULL);
	DWORD fileSizeHigh, fileSizeLow;
	DWORD64 fileSize =  -1;
	DWORD bytesRead = -1;

	fileSizeLow = GetFileSize(hFile, &fileSizeHigh);
	fileSize = ((DWORD64)fileSizeHigh << 32) + fileSizeLow;
	if (fileSize > MAX_FILE_SIZE)
	{
		std::wcout << L"File is too big to read" << std::endl;
		CloseHandle(hFile);
		return 1;
	}
	std::wcout << L"File size: " << fileSize << std::endl;
	char* buffer = new char[fileSize + 1];
	ZeroMemory(buffer, fileSize + 1);
	ReadFile(hFile, buffer, (DWORD)fileSize, &bytesRead, NULL);
	std::wcout << L"File contents: " << std::endl;
	std::wcout << buffer << std::endl;
	delete[] buffer;
	CloseHandle(hFile);

	return 0;
}

The first thing I see that can be replaced is the pair of calls to new and delete. I’ll replace the manually managed buffer with a std::vector<T>. Since I am using a vector, I don’t need to explicitly allocate and deallocate memory; I can specify how much memory is needed in its declaration, and when the std::vector falls out of scope, it will be deallocated automatically. I do still make use of a pointer to the vector’s memory, which is accessible through the method std::vector<T>::data(). ReadFile needs a pointer to the memory in which it will deposit its data, and that pointer is provided by way of this method.

There is also a HANDLE variable, named hFile, used for managing the file. I’ve written on wrapping these in unique pointers before; you can read about that here. In that post, I implemented a functor that contains the definition for how the handle is to be deleted. Rather than manually ensuring I associate the functor with the pointer, I also made a function that handles the association for me, so it is done the same way every time. The same approach works with a std::shared_ptr<T>, though you should generally only do this if you really need to share the resource with more than one object. On a unique pointer, the deleter is part of the object’s type. On a shared pointer, the deleter is not part of the type but is stored in the instance data for the pointer. I’ll replace my usage of CreateFile (the Win32 function) with a wrapper function that returns the handle as a std::shared_ptr. That wrapper function looks like this.

// The functor from the earlier post on wrapping handles: it defines
// how the HANDLE is to be closed when the pointer releases it.
struct HANDLECloser
{
	void operator()(HANDLE handle) const
	{
		if (handle != nullptr && handle != INVALID_HANDLE_VALUE)
		{
			CloseHandle(handle);
		}
	}
};

using HANDLE_shared_ptr = std::shared_ptr<void>;

HANDLE_shared_ptr CreateFileHandle(
	std::wstring fileName, 
	DWORD dwDesiredAccess = GENERIC_READ, 
	DWORD dwShareMode = FILE_SHARE_READ, 
	LPSECURITY_ATTRIBUTES lpSecurityAttributes = NULL, 
	DWORD dwCreationDisposition = OPEN_EXISTING, 
	DWORD dwFlagsAndAttributes = 0, 
	HANDLE hTemplateFile = NULL)
{
	HANDLE handle = CreateFile(fileName.c_str(), dwDesiredAccess, dwShareMode, lpSecurityAttributes, dwCreationDisposition, dwFlagsAndAttributes, hTemplateFile);
	if (handle == INVALID_HANDLE_VALUE || handle == nullptr)
	{
		return nullptr;
	}
	return std::shared_ptr<void>(handle, HANDLECloser());
}

In the following, you can see the new implementation of my main() method. Notice that when calling ReadFile, I call the shared pointer’s get() method to pass the underlying HANDLE value to the function. I’m no longer explicitly invoking CloseHandle(). Instead, when main() returns, the deleter will be invoked indirectly. If you set a breakpoint on it, you’ll see when this happens.

int main(int argc, char** argv)
{
	DWORD fileSizeHigh, fileSizeLow;
	DWORD64 fileSize = -1;
	DWORD bytesRead = -1;
	if (argc < 2)
	{
		std::wcout << L"Usage: ShowFileContents <filename>" << std::endl;
		return 1;
	}
	std::string filename = argv[1];
	std::wstring wfilename = std::wstring(filename.begin(), filename.end());

	auto fileHandle = CreateFileHandle(wfilename.c_str());
	if (fileHandle == nullptr)
	{
		std::wcout << L"Could not open file" << std::endl;
		return 1;
	}

	fileSizeLow = GetFileSize(fileHandle.get(), &fileSizeHigh);
	fileSize = ((DWORD64)fileSizeHigh << 32) + fileSizeLow;
	if (fileSize > MAX_FILE_SIZE)
	{
		std::wcout << L"File is too big to read" << std::endl;
		return 1;
	}
	std::wcout << L"File size: " << fileSize << std::endl;
	std::vector<char> buffer(fileSize / sizeof(char) + 1, 0);
	ReadFile(fileHandle.get(), buffer.data(), (DWORD)fileSize, &bytesRead, NULL);
	std::string bufferText = std::string(buffer.begin(), buffer.end());
	std::wcout << L"File contents: " << std::endl;
	std::cout << bufferText << std::endl;

	return 0;
}


You’ll see use of this soon in an upcoming post on SmartCards. The code examples for it make Windows API calls to the Smart Card functions. I’ll be making use of shared pointers with deleters for managing the resources in that project.



A Quick Introduction to Cosmos DB in C#

This is for those who need to get productive with Cosmos DB in a hurry. There’s a lot that could be discussed, but I think you’ll first want to set up a local development environment, write data to it, and read that data back. In this walkthrough, I’ll show how to make a connection to your local development instance of Cosmos DB. Configuring a production connection is a little more involved; if you are just getting started, it isn’t an immediate concern, so I won’t cover it here. I’ve focused specifically on C# instead of targeting multiple languages to keep this shorter. Let’s get started. You first need to install the Cosmos DB emulator.

Installing the Cosmos DB Emulator

You can download the Cosmos DB Emulator from Microsoft at this address: https://aka.ms/cosmosdb-emulator. You can start the emulator from the command line. For ease of starting it, I would suggest adding the program’s path to your PATH environment variable. Once it is installed and the path updated, you can start the Cosmos DB Emulator with the following command.

Microsoft.Azure.Cosmos.Emulator.exe

By default, it will run on port 8081. If you would like to run it on a different port, use the /port=[port number] parameter. Once the emulator is running, you can view its contents at the URL https://localhost:8081/_explorer/index.html. This view shows the information needed for connecting to the emulator instance. Note that the information shown here is the same on every computer on which you run the emulator; the emulator is only for testing and not for production. The emulator accepts communication over TLS. For this purpose, installing the emulator also installs a development certificate for encrypting the TLS traffic.

Create a new project and use the package manager to add a reference to Microsoft.Azure.Cosmos. If you are using the command line to manage your project, run the following command from your project directory.

dotnet add package Microsoft.Azure.Cosmos

Creating the Database and Container in C#

With that in place, we can get into the code. Start by adding a using directive for the namespace.

using Microsoft.Azure.Cosmos;

There are a few objects that we need to create. We need a client object for connecting to the database, the database object itself, and containers within the database. If you are familiar with traditional databases, containers are similar to tables. When we create the container, the values put in it will have an identifier in a field named id. We also identify a field on which the data will be logically grouped, or partitioned. For my example, I’m using an employee database and am partitioning on the department to which someone is assigned. The partition key is expressed in what looks like a file path. But instead of a path through nested folders, it is a path through what could be nested objects. If the partition key field is at the root of the object, the path looks like a path to a file in the root directory.
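For instance, given a hypothetical employee document shaped like the one below, a partition key on the root-level dept field is written as /dept, while a key on the nested city field would be written as /address/city.

```json
{
  "id": "e1",
  "name": "Joel Johnson",
  "dept": "IT",
  "address": {
    "city": "Example City"
  }
}
```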

For our local test, we will be using a resource key that is the default for any local instance of the Cosmos DB emulator. We would not use this key in a production environment, but for local tests it is fine.

const string RESOURCE_TOKEN = "C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==";
using CosmosClient client = new(accountEndpoint: "https://localhost:8081", authKeyOrResourceToken: RESOURCE_TOKEN);
Database database = await client.CreateDatabaseIfNotExistsAsync("employeeDatabase");
Container employeeContainer = await database.CreateContainerIfNotExistsAsync("employeeContainer", "/dept");

If we run the above code and then open the Cosmos DB explorer in our browser, we will find that there’s a new container named employeeContainer, though there is no data in it yet.

Adding an Item to the Container

We can add an item to the container with only one or two more statements of code. To put an object into the container, we create and initialize an object, then upsert it into the database.

var item = new Employee()
{
    ID = Guid.NewGuid(),
    Name = "Joel Johnson",
    Department = "IT"
};
await employeeContainer.UpsertItemAsync(item);

Now if we run the code and look in the Cosmos DB explorer, we will see our item. In addition to the fields from the public members of our object, there are some additional fields prefixed with an underscore (_) that hold metadata about our item, such as the timestamp (_ts) and the etag (_etag).

Reading an Item from the Container

If we wanted to retrieve a specific item from the container and we know its id value and partition key value, we can use the ReadItemAsync<T> method on the container to retrieve the item. This method will deserialize the contents and return our data as an object.

var readValue = await employeeContainer.ReadItemAsync<Employee>(
    id: idvalue.ToString(), 
    partitionKey: new PartitionKey("IT"));

We could also read the item as a stream. Reading the item this way returns all the data associated with the item, including the metadata fields.

var itemStream = await employeeContainer.ReadItemStreamAsync(
                    id: idvalue.ToString(), 
                    partitionKey: new PartitionKey("IT")
);
using ( StreamReader readItemStreamReader = new StreamReader(itemStream.Content))
{
    string content = await readItemStreamReader.ReadToEndAsync();
    Console.WriteLine(content);
}

Querying an Item

You probably don’t know the exact ID of the value(s) that you want to read, but you may know something about the other data of the items you want. Ironically, while Cosmos DB is a “NoSQL” database, it supports querying with SQL.

using  FeedIterator<Employee> feedIterator = employeeContainer.GetItemQueryIterator<Employee>(
                    queryText: "SELECT * FROM c WHERE c.dept = 'IT'");
while(feedIterator.HasMoreResults)
{
    FeedResponse<Employee> response = await feedIterator.ReadNextAsync();
    foreach(Employee employee in response)
    {
        Console.WriteLine($"Found item {employee}");
    }
}

You wouldn’t want your code to be vulnerable to SQL injection attacks, so if a parameter can vary, don’t construct the query by concatenating strings; pass the value as parameterized input. In the SQL query, named parameters are prefixed with the @ symbol. In the above, if we wanted to pass the department as a parameter instead of embedding it in the query, we would use code like the following.

QueryDefinition query = new QueryDefinition("SELECT * FROM c WHERE c.dept = @dept")
    .WithParameter("@dept", "IT");
using FeedIterator<Employee> feedIterator = employeeContainer.GetItemQueryIterator<Employee>(query);

If you are familiar with LINQ, you can use that to query information as well. The container’s GetItemLinqQueryable<T>() method returns an object that you can use for LINQ queries.

var employeeLinqContainer = employeeContainer.GetItemLinqQueryable<Employee>(allowSynchronousQueryExecution: true );
var employeeQuery = employeeLinqContainer
    .Where(e => e.Department == "IT")
    .Where(e => e.DateOfBirth > new DateTime(1970, 01, 01))
    .OrderBy(e => e.Name);
            
foreach(var employee in employeeQuery)
{
    Console.WriteLine($"Found item {employee}");
}

I hope that this was enough to get you started!

