Setting Up for Pi Pico Development (2025)

In a previous post, I mentioned that I was re-introducing myself to development for the Pi Pico. The Pico is a microcontroller board, often compared to an Arduino, that can be programmed from a Linux, Mac, or Windows machine. The Pico is based on the RP2040 chip, a dual-core ARM Cortex-M0+ processor typically clocked between 125 and 133 MHz. It has 264 KB of SRAM, 2 MB of flash memory, and 26 general-purpose IO pins, some of which support additional functionality. The other functionality overlaid on these pins includes

  • 2 UART controllers
  • 2 SPI controllers
  • 2 I2C controllers
  • 16 PWM channels

There are several development boards that use the RP2040. Collectively, I generically refer to all of these as Pico; it is a bit easier to say than “RP2040-based board.”

A smaller RP2040-based board by Waveshare

I already had a few machines set up for Raspberry Pi Pico development. While that procedure still works, as do those development machines, I was recently reintroducing myself to Pico development, so I started with a clean installation and went to the currently published setup instructions. The more recent instructions are a lot easier to follow; there are fewer dependencies on manually setting paths and downloading files. The easier process is made possible through a Visual Studio Code extension. This extension, which is still labeled as a zero version at the time that I am making this post (0.17.3), adds project generation and sample code along with scripts and automations for common tasks. To get started, just install the Raspberry Pi Pico Visual Studio Code extension. Once it is installed, you’ll have a new icon on the left pane of VS Code for Pico-related tasks.

The first time you do anything with this icon, expect it to be slow. It installs the other build tools that it needs on-demand. I prefer to use the C++ build tools. Most of what I write here will be focused on that. I’ll start with creating a new C++ project. Double-clicking on “New C/C++ Project” from the Pico tools panel gets the process started.

This will only be a “Hello World” program; we will have the Pico print a message to a serial port in a loop. The new project window lets us specify our target hardware, including which hardware features we plan to use. Selecting a feature will result in the project’s build file linking to the necessary libraries for that feature and adding a small code sample that accesses it. Select a folder in which the project folder will be created, enter a project name, and check the box labeled “Console over USB.” After selecting these options, click on the “Create” button.

This is the part that takes a while the first time. A notification will show in VS Code stating that it is installing the SDK and generating the project. The wait is only a few minutes. While this is executing, it is a good time to grab a cup of coffee.

When you get back, you’ll see VS Code welcome you with a new project. The default new project prints “Hello, world!\n” in a loop with a 1-second delay. Grab your USB cable and a Pico; we can immediately run this program to see if the build chain works. On the Pico, there’s a button labeled BOOTSEL. Connect your USB cable to your computer, then connect the Pico, making sure you are holding down this button as you connect it. The Pico will show up on your computer as a writable drive. After you’ve done this, take note of which serial ports show up on your computer. In my case, I’m using Windows, which shows that COM1 is the only serial port. In VS Code, you now have several tasks for your project that you can execute. Double-click on Run Project (USB). The code will compile and deploy to the Pico, and the Pico will reboot and start running the code.

Check to see what serial ports exist on your computer now. For me, there is a new port named COM4. Using PuTTY, I open COM4 at a baud rate of 115,200. The printed text starts to show there.
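
If you’d rather list the serial ports from a terminal than dig through Device Manager, PowerShell can enumerate them with a standard .NET call:

[System.IO.Ports.SerialPort]::GetPortNames()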

Using the USB for serial output is generally convenient, but at times you may want to use the USB for other features. The USB output is enabled or disabled in part through a couple of lines in the CMakeLists.txt file.

pico_enable_stdio_uart(HelloWorldSample 0)
pico_enable_stdio_usb(HelloWorldSample 1)

The 1 and 0 can be interpreted as meaning enable and disable. Swap these values (shown below), then run the project again: disconnect the Pico, reattach it while holding down the button, and select the Run Project (USB) option from VS Code. When you run the code this time, the output is transmitted over GPIO pins 0 and 1 instead of over USB. But how do we read this?
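
For reference, the two lines in CMakeLists.txt after the swap:

pico_enable_stdio_uart(HelloWorldSample 1)
pico_enable_stdio_usb(HelloWorldSample 0)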

FTDI USB

FTDI is the name of an integrated circuit manufacturer. For microcontroller interfacing, you might often see people refer to “FTDI USB” cables. These are USB devices that have 3 or 4 pins for connecting to other serial devices, and they are generally cheaply available. The pins that we care about will be labeled GND (Ground), TX (Transmit), and RX (Receive). The transmit pin on one end of a serial exchange connects to the receive pin on the other, and vice versa. On the Pico, the default pins used for uart0 (the name of our serial port) are GP0 for TX and GP1 for RX. When connecting an FTDI device, connect the FTDI’s RX to the Pico’s TX (on GP0), then the FTDI’s TX to the Pico’s RX (on GP1), and finally the FTDI’s ground to the Pico’s ground.
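
On Windows, the FTDI cable shows up as another COM port that you can open in PuTTY at 115,200 baud, just as before. On a Linux host, it typically appears as a device such as /dev/ttyUSB0 (the exact device name is an assumption; check dmesg after plugging it in), and a terminal program pointed at it shows the output:

screen /dev/ttyUSB0 115200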

GPIO – Setting a Pin

Many Picos have an LED attached to one of the pins that is immediately available for test programs, but not all do. On the Pi Pico and Pi Pico 2, GPIO 25 is connected to an LED. On the Pi Pico W, the LED is connected to the WiFi radio and not the RP2040 directly. For uniformity, I’ll drive an external LED. I’ve taken an LED and connected it in series with a resistor; 220Ω should be a sufficient value. I’m connecting the longer wire of the LED to GP5 and the shorter pin to ground.

In the code, the pin number is assigned to a #define. This is common, as it makes the code more flexible for others that may be using a different pin assignment. Before we can start writing to the pin, we need to call an initialization function for the pin number named gpio_init(). After the initialization, we need to set the pin to be in either input or output mode. Since we are going to be controlling an LED, this needs to be output mode. This is done with a call to gpio_set_dir() (meaning “set direction”), passing the pin number as the first argument and the direction (GPIO_IN or GPIO_OUT) as the second argument. For writing, we use GPIO_OUT. With the pin set to output, we can drive the pin to a high or low state by calling gpio_put(). The pin number is passed in the first argument, and a value indicating whether it should be in a high or low state in the second argument. A zero value is considered low, while a non-zero value is considered high. To make it apparent that the LED is being driven by our control of the pin (and not that we just happened to wire the LED to a pin that is always high), we will blink the light once per second: in a loop, turn the light on, wait half a second, turn the light off, and wait again.

#include <stdio.h>
#include "pico/stdlib.h"

#define LED_PIN 5
int main()
{
    stdio_init_all();
    gpio_init(LED_PIN);              // take control of the pin
    gpio_set_dir(LED_PIN, GPIO_OUT); // we are driving an LED, so output mode

    while (true) {
        gpio_put(LED_PIN, 1);        // LED on
        sleep_ms(500);
        gpio_put(LED_PIN, 0);        // LED off
        sleep_ms(500);
    }
}

When we run the code now, we should see the light blink.

Up Next: Programmable IO – The Processor within the Processor

While the GPIO system can be manipulated by the main processor cores, there are also smaller processors on the silicon that exist just for controlling the GPIO. These processors have a much smaller instruction set but are great for writing deterministic code that controls the pins. This system of sub-processors and the pins that they control is known as “Programmable IO.” They are programmed in assembly. There’s much to say about PIO. In the next post that I make on the Pico, I’ll walk you through an introduction to the PIO system.


Posts may contain products with affiliate links. When you make purchases using these links, we receive a small commission at no extra cost to you. Thank you for your support.

Mastodon: @j2inet@masto.ai
Instagram: @j2inet
Facebook: @j2inet
YouTube: @j2inet
Telegram: j2inet
Bluesky: @j2i.net

Rediscovering Pi Pico Programming with an IR Detector

I’ve used a Pi Pico before. But it has been a while, and I decided to jump back into it in furtherance of some other project I want to do. I’m specifically using a Pico W on a Freenove breakout board. The nice thing about this board is that all the GPIOs have status LEDs that let you monitor the state of each GPIO visually. For those that might have immediate concerns, the LEDs are connected to the GPIOs via hex inverters instead of directly, which minimizes the interaction that they may have with devices that you connect to them.

Blinking the Light

About the first program that one might try with any microcontroller is to blink a light. I accomplished that part without issue, but for those that are newer to this, I’ll cover it in detail, though I won’t cover the steps of setting up the SDK.

I’ve made a folder for my project. Since I plan to evolve this project to work with an infrared detector, I called my project folder irdetect. I’ve made two files in this folder.

  • CMakeLists.txt – the build configuration file for the project
  • main.cpp – the source code for the project

For the CMakeLists.txt file, I’ve specified that I’m using the C++23 standard. This configuration also informs the make process that main.cpp is the source file and that the target executable name will be irdetect.

cmake_minimum_required(VERSION 3.13)

include(pico_sdk_import.cmake)

project(test_project C CXX ASM)
set(CMAKE_C_STANDARD 11)
set(CMAKE_CXX_STANDARD 23) # Latest C++ standard available
pico_sdk_init()

add_executable(irdetect
   main.cpp
)

# Link the SDK's core runtime library; without this line, the build fails
# at the link step.
target_link_libraries(irdetect pico_stdlib)

pico_enable_stdio_usb(irdetect 1)
pico_enable_stdio_uart(irdetect 1)
pico_add_extra_outputs(irdetect)

The initial source code for blinking an LED just alternates the state of an arbitrary GPIO pin. Since I’m using a breakout board with LEDs for all the pins, I am not restricted to one pin. For the pin I selected, it is necessary to call gpio_init() for the pin, and then set its direction to output through gpio_set_dir(). If you don’t do this, then attempts to write to the pin will fail (speaking from experience!).

#include <stdio.h>
#include "pico/stdlib.h"
#include "hardware/gpio.h"
#include "pico/binary_info.h"
#include "pico/cyw43_arch.h"


const uint LED_DELAY_MS = 250; //quarter second
#ifdef PICO_DEFAULT_LED_PIN
const uint LED_PIN = PICO_DEFAULT_LED_PIN;
#else
const uint LED_PIN = 15;
#endif


// Initialize the GPIO for the LED
void pico_led_init(void) {
	gpio_init(LED_PIN);
	gpio_set_dir(LED_PIN, GPIO_OUT);
}

// Turn the LED on or off
void pico_set_led(bool led_on) {
	gpio_put(LED_PIN, led_on);
}

int main()
{
	stdio_init_all();
	pico_led_init();

	while(true)
	{
		pico_set_led(true);
		sleep_ms(LED_DELAY_MS);
		pico_set_led(false);
		sleep_ms(LED_DELAY_MS);
	}
	return 0;
}

To compile this, I made a subfolder named build inside of my project folder. I’m using a Pico W. When I compile the code, I specify the Pico board that I’m using.

cd build
cmake .. -DPICO_BOARD=pico_w
make

Some output flies by on the screen, after which build files have been deposited into the folder. The one of interest is irdetect.uf2. I need to flash the Pico with this. The process is extremely easy. Hold down the BOOTSEL button on the Pico while connecting it to the Pi. It will show up as a mass storage device. Copying the file to the device will cause it to flash and then reboot. The device is automatically mounted to the file system. In my case, this is to the path /media/j2inet/RPI-RP2

cp irdetect.uf2 /media/j2inet/RPI-RP2

I tried this out, and the light blinks. I’m glad output works, but now to try input.

Reading From a Pin

I want the program to now start off blinking a light until it detects an input. When it does, I want it to switch to a different mode where the output reflects the input. In the updated source I initialize an additional pin and use gpio_set_dir() to set it as an input pin. I set yet another pin to output as a convenience: I need a positive line to drive the input high. I could use the voltage pin with a resistor, but I found it more convenient to set another GPIO high and use it as my positive source for now.

#include <stdio.h>
#include "pico/stdlib.h"
#include "hardware/gpio.h"
#include "pico/binary_info.h"
#include "pico/cyw43_arch.h"


const uint LED_DELAY_MS = 50;
#ifdef PICO_DEFAULT_LED_PIN
const uint LED_PIN = PICO_DEFAULT_LED_PIN;
#else
const uint LED_PIN = 15;
#endif
const uint IR_READ_PIN = 14;
const uint IR_DETECTOR_ENABLE_PIN = 13;


// Initialize the GPIO for the LED
void pico_led_init(void) {
        gpio_init(LED_PIN);
        gpio_set_dir(LED_PIN, GPIO_OUT);

        gpio_init(IR_READ_PIN);
        gpio_set_dir(IR_READ_PIN, GPIO_IN);

        gpio_init(IR_DETECTOR_ENABLE_PIN);
        gpio_set_dir(IR_DETECTOR_ENABLE_PIN, GPIO_OUT);
}

// Turn the LED on or off
void pico_set_led(bool led_on) {
        gpio_put(LED_PIN, led_on);
}

int main()
{
        stdio_init_all();
        pico_led_init();
        bool irDetected = false;
        gpio_put(IR_DETECTOR_ENABLE_PIN, true);
        while(!irDetected)
        {
                irDetected = gpio_get(IR_READ_PIN);
                pico_set_led(true);
                sleep_ms(LED_DELAY_MS);
                pico_set_led(false);
                sleep_ms(LED_DELAY_MS);
        }

        while(true)
        {
                bool p = gpio_get(IR_READ_PIN);
                gpio_put(LED_PIN, p);
                sleep_us(10);
        }
        return 0;
}

When I run this program and manually drive the input pin high through a resistor, it works fine. My results were not the same when I tried using an IR detector.

Adding an IR Detector

I have two IR detectors. One is an infrared photodiode. This component has a high resistance until it is struck with infrared light; when it is, its resistance drops. Placing that component in the circuit, I see the output pin go from low to high when I illuminate the diode with an IR flashlight or aim a remote control at it. Cool.

I tried again with a VS1838B. This is a three-pin IC. Two of the pins supply it with power; the third is an output pin. This IC has an IR detector, but instead of detecting the presence of IR light, it detects the presence of a pulsating IR signal, provided that the pulsing is within a certain frequency band. The IC is primarily for detecting signals sent on a 38 kHz carrier. I connected this to my Pico and tried it out. The result was no response. I can’t find my logic probe, but I have an oscilloscope. Attaching it to the output pin, I detected no signal. What gives?

This is where I searched the Internet for the likely problem and solutions. I found other people with similar circuits and problems, but no solutions. I then remembered reading something else about the internal pull-up resistors in Arduinos. I grabbed a resistor, connected my input pin to a pin with a high signal, and tried again. It worked! The VS1838B signals by pulling the output pin to a low voltage. I went to Bluesky and posted about my experience.

https://bsky.app/profile/j2i.net/post/3lgar7brqfs2n

Someone quickly pointed out to me that there are pull-up resistors in the Pi Pico. I just need to turn them on with a function call.

“those can be activated at runtime: gpio_pull_up(PIN_NUMBER); This works even for the I2C interface.”

— Abraxolotlpsylaxis (@abraxolotlpsylaxis.bsky.social), 2025-01-21T12:18:18.964Z

I updated my code, and it works! When I attach the detector to the scope, I also see the signal now.
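
The change is small. Here is a sketch of the updated input-pin initialization in pico_led_init(), using the same pin constant as above:

        gpio_init(IR_READ_PIN);
        gpio_set_dir(IR_READ_PIN, GPIO_IN);
        gpio_pull_up(IR_READ_PIN);  // activate the RP2040's internal pull-up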

Now that I can read, the next step is to start decoding a remote signal. Note that there are already libraries for doing this. I won’t be using one (yet) since my primary interest here is diving a bit further into the Pico. But I do encourage the use of a third-party library if you are aiming to just get something working with as little effort as possible.

Code Repository

While you could copy the code from above, if you want to grab the code for this, it is on GitHub at the URL https://github.com/j2inet/irdetect/. Note that with time, the code might transition to something that no longer resembles what was mentioned in this post.



A Pi Pico Breakout Board

I’m trying out a few things with Raspberry Pi Pico variants and used a breakout board that I found to be especially convenient, so I’m taking a moment to talk about it here and why I liked it. Generally, when I’ve worked with single board computers and microcontrollers, I’ve started off using a breadboard for any circuitry that I wanted to connect. There are times when that feels like overkill, such as when connecting just a couple of connectors to the board. In these cases, a breakout board is especially convenient.

When I ordered my boards, I didn’t get all boards of the same type. The board that stands out is from Freenove (affiliate link). There are a few things that distinguish it from the other boards. A small but noticeable convenience is that this board comes with a small screwdriver for the terminal block headers. This board also came fully assembled; many of the other boards ship as a circuit board with components that need to be soldered to be usable. The most stand-out feature of the board is its status LEDs. There’s an LED for each of the GPIO pins, along with LEDs for power and some other signals.

Many microcontroller boards and SBCs have an LED that can be driven by one of the GPIOs, which is great when testing that “Hello World” program and ensuring that your build tools work. With status LEDs on the other pins, it becomes easier to diagnose otherwise simple programming errors. In one case, I forgot to initialize a pin as an output pin and was able to visually observe that nothing was being written. There was no need to attach probes to identify what was actually happening.

All of the breakout boards I tried had some form of labelling on the pins. Unfortunately, that text is generally a little too small for me to read. But the Freenove board colors the GPIO and GND labels differently, making it easier to differentiate between pins at a glance. I’ll talk more about one of my experiences in a following post.



Building a Wake On Lan Packet

Source code on GitHub

Though I’m aware that a computer can be configured to wake up when it receives a specific LAN packet, I’ve not used the feature until now. I was motivated to do so after a few incidents when I had driven to the office and realized that I had forgotten to push my code from a computer that was at home. It has happened before, in which case I would remote into the computer and perform a push. But a few times that this happened, the computer had already gone to sleep, and I had to persuade someone to press a key on the keyboard to wake it up.

It has happened more than once, and I decided it was time to do something about it. The first couple of things that I needed to do were to ensure that the Wake On Lan (WOL) feature was turned on in the BIOS (how this is done may vary from one computer to another) and to get the MAC address of my computer’s active network adapter. Traditionally, WOL has been a feature for wired network adapters only. This is fine for me, since my desktop computers are all on wired connections. From a PowerShell terminal, the MAC addresses for all of the network adapters can be viewed with the command Get-NetAdapter. The output looks like the following.

PS C:\Users\joel> get-netadapter

Name                      InterfaceDescription                    ifIndex Status       MacAddress             LinkSpeed
----                      --------------------                    ------- ------       ----------             ---------
Bluetooth Network Conn... Bluetooth Device (Personal Area Netw...      11 Disconnected 23-55-D8-7B-36-B1         3 Mbps
Ethernet                  Realtek PCIe GbE Family Controller            7 Disconnected 70-36-BC-23-44-66          0 bps
Wi-Fi                     Intel(R) Wi-Fi 6E AX211 160MHz                5 Up           23-55-D8-7B-36-AD       400 Mbps

The Packet Structure

The structure of the WOL packet is simple. We only need to build this packet and send it in a UDP broadcast message. The packet is 102 bytes in length: the first 6 bytes are just 0xFF repeated 6 times, and the rest of the packet is the MAC address repeated 16 times. Once the packet is built, it must be sent as a broadcast message. The reason we broadcast it is that the target computer might not even have an IP address, since it isn’t turned on. Sending it to all computers on a subnet ensures that our target computer will receive the message.

Building the Packet in C++

std::vector<BYTE> MacAddressToByteArray(std::wstring macAddress)
{
	std::vector<BYTE> macAddressBytes;
	std::wstring macAddressPart;
	for (size_t i = 0; i < macAddress.size(); i++)
	{
		if (macAddress[i] == L':' || macAddress[i] == L'-') // accept ':' or '-' separators, matching the Get-NetAdapter output
		{
			macAddressBytes.push_back((BYTE)std::stoi(macAddressPart, nullptr, 16));
			macAddressPart.clear();
		}
		else
		{
			macAddressPart.push_back(macAddress[i]);
		}
	}
	macAddressBytes.push_back((BYTE)std::stoi(macAddressPart, nullptr, 16));
	return macAddressBytes;
}


void SendWOL(std::vector<BYTE> macAddress)
{
	std::vector<BYTE> magicPacket;
	for (size_t i = 0; i < 6; i++)
	{
		magicPacket.push_back(0xFF);
	}
	for (size_t i = 0; i < 16; i++)
	{
		for (size_t j = 0; j < macAddress.size(); j++)
		{
			magicPacket.push_back(macAddress[j]);
		}
	}
	BroadcastMessage(magicPacket);
}

void SendWOL(std::wstring macAddress) {
	auto bytes = MacAddressToByteArray(macAddress);
	SendWOL(bytes);
}

Building the Packet in C#

static void SendWOL(IEnumerable<byte> MACAddress)
{
    byte[] packet = new byte[102];
    for (int i = 0; i < 6; i++)
        packet[i] = 0xFF;
    for (int i = 0; i < 102-6; i ++)
    {
        packet[i + 6] = MACAddress.ElementAt(i%6);
    }
    UdpClient client = new UdpClient();
    client.Client.Bind(new IPEndPoint(IPAddress.Any, 0));
    client.Send(packet, packet.Length, new IPEndPoint(IPAddress.Broadcast, 9));
}

static void SendWOL(String MACAddress)
{
    var parts = MACAddress.Split(new char[] { ':', '-' });
    if (parts.Length != 6)
        return;
    byte[] mac = new byte[6];
    for (int i = 0; i < 6; i++)
        mac[i] = Convert.ToByte(parts[i], 16);
    SendWOL(mac);
}

static List<String> GetMacAddressList(String[] args)
{
    List<String> retVal = new List<String>();
    foreach (var arg in args)
    {
        if (MacAddressRegex.IsMatch(arg))
            retVal.Add(arg);
    }
    return retVal;
}
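
The GetMacAddressList method above references a MacAddressRegex that is defined elsewhere in the program. A plausible definition (my assumption; the repository’s actual pattern may differ) matches six hex pairs separated by ':' or '-':

using System.Text.RegularExpressions;

static readonly Regex MacAddressRegex =
    new Regex("^([0-9A-Fa-f]{2}[:-]){5}[0-9A-Fa-f]{2}$");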

Sending the Packet

The packet must be broadcast over UDP. We use broadcast because the computer doesn’t have an IP address that can be used for sending a unicast message directly to it. It doesn’t matter what port the message is sent on, but we will use port 9 since many routers are configured to allow UDP traffic on that port. In the C# code, broadcasting the packet is simple. It can be done in three lines.

UdpClient client = new UdpClient();
client.Client.Bind(new IPEndPoint(IPAddress.Any, 0));
client.Send(packet, packet.Length, new IPEndPoint(IPAddress.Broadcast, 9));

The C++ code uses WinSock2 for network communication. Using it is more involved than using the UdpClient object in .NET, but it isn’t complex. We create a datagram socket, and then enable its broadcast option. We set the target port to 9 and specify the UDP broadcast address (255.255.255.255) as the target address. Then we send the data through the socket.

// Note: this assumes the application has already called WSAStartup to
// initialize WinSock before BroadcastMessage is invoked.
bool BroadcastMessage(std::vector<BYTE> message)
{
	SOCKET sock = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
	if (sock == INVALID_SOCKET) {
		printf("Socket creation failed.\n");
		WSACleanup();
		return false;
	}

	BOOL broadcast = TRUE;
	if (setsockopt(sock, SOL_SOCKET, SO_BROADCAST, (char*)&broadcast, sizeof(broadcast)) < 0) {
		printf("Error in setting Broadcast option.\n");
		closesocket(sock);
		WSACleanup();
		return false;
	}

	sockaddr_in broadcastAddr;
	broadcastAddr.sin_family = AF_INET;
	broadcastAddr.sin_port = htons(9); // Use your desired port
	broadcastAddr.sin_addr.s_addr = INADDR_BROADCAST;

	if (sendto(sock, (char*)(message.data()), message.size(), 0, (sockaddr*)&broadcastAddr, sizeof(broadcastAddr)) < 0) {
		printf("Broadcast message send failed.\n");
		closesocket(sock);
		WSACleanup();
		return false;
	}
	closesocket(sock);
	return true;
}

Downloading the Executable

I’ve made the executables for both the C++ and the C# code available for download on GitHub. You will find them in the bin folder. There are both signed and unsigned versions available. I made a signed version because one of the computers I intend to use this on is a corporate-managed machine that gives less trust to unsigned executables. I can avoid some headaches and paperwork by having a signed executable.

Program Invocation

Not shown in the above source code is that the C++ and C# programs can both accept the MAC address from the command line. Invoking the program with the MAC address as its argument will result in it sending a WOL signal to that MAC address. More than one MAC address can be passed to the program.
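
For example, to wake the machine with the Ethernet adapter from the Get-NetAdapter output earlier (the executable name here is illustrative; use whichever binary you downloaded or built):

WakeOnLan.exe 70-36-BC-23-44-66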



iPhone Photo Cage

I get questions about the case on my iPhone frequently enough that I thought I would write about it so that I have an answer I can point people to. The case on my iPhone is different from others in that it is made of metal (aluminum, I believe) and has 1/4-inch threaded screw holes for attaching photo accessories. Without anything additional, I can attach it to a tripod in any of the 4 orientations.

My iPhone in a cage. Note that the back of the iPhone has a sticker on it that reflects the interior.

The cases I have are for the iPhone 13 and the iPhone 13 Pro. Variations of the cases are available for other iPhones too, though the cases get redesigned with each iteration and don’t look alike. If you’d like to find one for your phone, here are some links. Note that these are Amazon affiliate links; I earn a small commission if you purchase through one of these links.



Breaking a String by Visual Width in JavaScript

Breaking a string by character count is trivially easy in JavaScript. I needed to break a string based on its visual width, though. The challenge is that each character in a string has a unique width depending on the character, font weight, font size, and other settings. I needed to do this for a project. While I’m not proud of the solution, it works, and it is something born out of working with what I have. In some other environments, there are font-metrics functions that would be assistive, but these are not present in JavaScript (the Canvas APIs have something that comes close, but they don’t help here because they can’t take CSS into account).

The solution that I used took a string and broke it into words. Each word was added to an element one at a time, wrapped in a <span/> tag. The offsetWidth and offsetHeight properties on the span elements indicate how much space each one takes up. I also had to wrap spaces in <span/> tags. Each time I added a word or space to a parent element, I measured the width to see if I had exceeded some maximum tolerated width. If that width hasn’t been exceeded, I keep going. If it has, I remove the last word that was added and grab all the other words and save them; they are all the words I could fit on that line. The word I removed from the string is then used to start a new string, and the process repeats.

A parent element is needed for this process so that the string can inherit display settings during measurement. This could be a zero-opacity parent element or something positioned offscreen to ensure that it doesn’t get displayed. Though in my testing, this process happens fast enough that the string is never displayed while being processed.

I don’t like adding things to the DOM for the sake of just getting a measurement. Some part of me has concerns about side effects of adding and removing items from the DOM, such as exacerbating the effects of some bug that might be present or increasing the frequency of garbage collection cycles. But right now, this is the best solution that I see.

function BreakAtWidth(text,parentElement, maxWidth) {
    if(maxWidth == null) {
        maxWidth = 80;
    }
    if(typeof parentElement == 'string') {
        parentElement = document.getElementById(parentElement);
    }
    var tempChild = document.createElement('span');
    tempChild.style.opacity = 0.0;
    parentElement.append(tempChild);
    var textParts = text.split(' ');
    var elementParts =[]; 
    var elementPartsCombinedString = '';
    var brokenParts = [];
    textParts.forEach(element => {
        elementParts.push(`<span>${element}</span>`);
        elementParts.push(`<span> </span>`); // wrap spaces too, so they get measured
    });

    for(var i=0;i<elementParts.length;++i) {
        elementPartsCombinedString += elementParts[i];
        tempChild.innerHTML = elementPartsCombinedString;
        const width = tempChild.offsetWidth;
        if(width >= maxWidth) {
            var resultString = elementPartsCombinedString.substring(0, elementPartsCombinedString.length - elementParts[i].length);
            if(resultString == '') {
                brokenParts.push(elementPartsCombinedString);
                elementPartsCombinedString = '';
            }
            else {
                brokenParts.push(resultString);
                elementPartsCombinedString = elementParts[i];
            }
        }
    }
    if(elementPartsCombinedString != '') {
        brokenParts.push(elementPartsCombinedString);
    }
    
    var cleanStringList = [];
    brokenParts.forEach(part => {
        cleanStringList.push(part.replaceAll('<span>', '').replaceAll('</span>', ''));
    });
    tempChild.remove();
    return cleanStringList;
}
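
A quick usage sketch, assuming the page contains an element with the ID "content" carrying the CSS settings you want the measurement to inherit:

var lines = BreakAtWidth('The quick brown fox jumps over the lazy dog', 'content', 200);
console.log(lines); // each entry fits within 200 pixels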


Tesla Super Charging More Open to Other Vehicles

I woke up this morning to a listing that was on the Chevrolet website for a Tesla Super Charger adapter that looks a lot like the Lectron Vortex. I wish I had taken a screenshot, because the page is no longer there. But seeing it was all that was necessary to motivate me to try something out. I drove to a Super Charger on the way to work to see if I could charge my Bolt EUV on it. Until now, the only vehicles that could use Tesla Super Chargers were the Tesla vehicles themselves along with vehicles from Rivian and Ford. Just before the original expected announcement for GM vehicles, Elon Musk fired the entire Super Charger team. That may have affected the rollout.

I have a couple of Super Charger adapters, including the Lectron Vortex. That’s the one that I tried out at the Super Charger. A word of warning though: the first time I used this adapter, the retention spring was a bit strong and I had a hard time removing it. I almost abandoned it! But since that first experience, I’ve not had any further problems. If you find your adapter stuck, I have a post about removing it.

After connecting the adapter and the charger to my car, I opened the Tesla app, selected my charger, and that was it: the car was charging. My car already had plenty of charge; I was only testing to make sure that current would flow, so I don’t have much to say about how long it took. I will comment on the cable length. The Tesla cables are short! Teslas all have their charge ports by the driver’s side rear tail light, and the cables are just long enough to reach there. My car’s port is on the side of the car right in front of the driver’s door. To charge, it is necessary to double-park; the Tesla website instructs people to park this way to make the cable reach. Newer chargers have longer cables.

Key to using a Super Charger is having an adapter, and they’ve tended to be in short supply. The Tesla website states one should only use OEM adapters. To date, the only OEM adapters are the ones made by Tesla for Ford, which seem to be perpetually on back order. I used the Lectron Vortex. Its appearance is identical to what was on the Chevy site (minus branding). These are available on Amazon (affiliate link) or directly from Lectron.



Using SSH with GitHub

GitHub is the most popular git-based repository host (but not the only one), so I write this for GitHub. The procedure is pretty much the same on other git hosts, with the differences being in the web interfaces. If you haven’t already, consider using SSH keys for accessing your git repositories. They provide a more secure authentication method than passwords. Each of your computers can have a different key; if a key were somehow compromised, you could revoke it for the compromised computer without affecting the other computers.

To use git with SSH keys, you need a public/private key pair. To create a key pair, use the following command. Make sure that you use your own email address.

ssh-keygen -t ed25519 -C "user@domain.com"

After typing this command in, you’ll be asked to enter a passphrase for the key. Think of this as the password; if you forget it, there is no way to recover it. The output from this command looks similar to the following.

Generating public/private ed25519 key pair.
Enter file in which to save the key (C:\Users\ThisUser/.ssh/id_ed25519):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in C:\Users\ThisUser/.ssh/id_ed25519.
Your public key has been saved in C:\Users\ThisUser/.ssh/id_ed25519.pub.
The key fingerprint is:
SHA256:9aglYQliSNTdn6AkDDAlchq+uC3lb43lg4yNgE8TG2s user@domain.com
The key's randomart image is:
+--[ED25519 256]--+
|=====...         |
|o...o.o.o.       |
|.. . = .+o..     |
|. +   .. ooo     |
|o===    X o .    |
|o==   .+++       |
|o=o= *  *        |
| .+.* +          |
|   ..  .         |
+----[SHA256]-----+

This creates two files named id_ed25519 and id_ed25519.pub. The file that ends with .pub is the public key and is shared with any entity that needs to authenticate you. The file without an extension is the private key; that one is not to be shared. Open the .pub file and copy its contents to your clipboard. You are going to need it in a moment.

Log in to github.com and go to your account settings. To get there, click on your profile icon in the upper-right corner and select Settings. From the menus on the left, select SSH and GPG Keys. Here, your SSH public keys will be listed (if you have any). Select the option New SSH Key. You’ll need to enter a name for the key (here, I entered the name of the computer that the key is associated with), select a key type (choose Authentication Key), and paste the key into the text box. Select Add SSH Key to save the key. The view will refresh and show your new key in the list.
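
At this point, you can confirm that GitHub accepts the key before touching any repository. If everything is in order, GitHub responds with a short greeting that includes your username.

ssh -T git@github.com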

To use the key, select a git repository and open its clone options; there you will find the option to clone over SSH.

A git repository showing the SSH URL.
The SSH cloning option for a repository.

To clone, just use that URL as a parameter to the clone command.

git clone git@github.com:j2inet/CppAppBase.git

You’ll be prompted for the passphrase. The cloning experience feels the same as it does when using a password. If you didn’t set a passphrase for your key, then you won’t be prompted at all.

PS C:\shares\projects> git clone git@github.com:j2inet/CppAppBase.git
Cloning into 'CppAppBase'...
Enter passphrase for key '/c/Users/User/.ssh/id_ed25519':
remote: Enumerating objects: 594, done.
remote: Counting objects: 100% (100/100), done.
remote: Compressing objects: 100% (70/70), done.
remote: Total 594 (delta 40), reused 75 (delta 25), pack-reused 494 (from 1)
Receiving objects: 100% (594/594), 82.72 MiB | 3.43 MiB/s, done.
Resolving deltas: 100% (281/281), done.
PS C:\shares\projects>


Signing Code With a Code Signing Certificate

Code signing is a great way to detect whether an exe or dll has been modified by someone other than the person that created it, and some systems give slightly higher trust to code that has been signed. I’ve finally gotten my own code signing certificate after a frustrating experience in which a software security system generated a security event after I created helloworld.exe to test a compiler (among other issues). Security software tends to be less aggressive when there is an attestation of the source of the executable.

Summary

  • The code signing certificate is stored on a physical secure element.
  • Physical access to the token is needed to sign code. Treat it as any device that contains secure information.
  • Individuals probably want an OV code signing certificate. Larger corporations probably want an EV certificate.
  • Code signed with this certificate becomes tied to the ID of the signing organization.

My Selected Certificate Issuer

I selected “Sectigo” as my certificate issuer. Part of the reason for my choice was that when I checked other issuers, I found that they were actually resellers for other issuers. The reseller prices were understandably a bit higher than going directly to an issuer.

Types of Code Signing Certificates

There are two types of code signing certificates: organization validation (OV) and extended validation (EV). Both are associated with some level of verification. The lower-level OV certificates are cheaper and require that less information be verified (this is what I purchased). For this certificate, the name of the person or entity requesting the certificate is verified, as is the ID of the person making the request, along with ensuring they are authorized to act on behalf of the entity. There is also phone number and email verification. For me, the most difficult part of the verification was the phone number. The certificate issuer tries to verify the phone number against public records, but if the phone number isn’t on some public record, they will accept a letter from one’s accountant or lawyer that ties this information to the other factors. I used a legal professional for phone verification.

Extended validation looks at additional factors such as an entity’s DUNS number and demand deposit account. If you don’t have these or are not familiar with what they are, you probably don’t want to go this route. Developers of lower-level applications such as system drivers will want this level of trust, though; Windows adds a lot of friction to installing certain types of drivers that are not signed with an EV certificate.

Once I completed verification, the key shipped out pretty darn fast. I completed my verification around 12:30 pm one day, and the certificate was delivered to my house by noon the next day.

Physical Token

Part of the security of modern code signing certificates is that they are shipped on a physical device. This is a requirement that Microsoft imposes for security reasons. If the certificate isn’t saved on a drive where someone can get to it, then it can’t be compromised. To get the key, someone would need to steal the physical device.

At first glance, one might confuse this token with a USB storage drive. It’s not one. Once I connected it to my computer, it became apparent what my security token is: a smart card with a USB interface. I’ve made a couple of posts about SmartCards (AKA JavaCards). Among other capabilities, JavaCards are able to securely store certificates and perform encryption and decryption without making the encryption keys themselves visible. They implement anti-tampering logic; experimenting with or attempting to circumvent the security can result in the device being permanently disabled. The token is protected with a couple of passwords. There is an administrator password and a user password. The user password for my token was made available to me via e-mail when the token was shipped, and it can be changed. The administrative password is never made available to the end user. When connected to the computer, the device shows up in Device Manager as a “Smart card reader.”

For Signtool to communicate with the token, an additional software item is necessary. For the token that I am using, this is the “SafeNet Authentication Client.” The download URL for the client was provided from my issuer in the same e-mail in which they shared the password. After installing the software, one of the first actions I took was to change the user password on the token from the random string of characters that had been assigned to it to something I can remember. The user password is necessary every time code is signed.

Signing and Timestamping the Code

For a DLL or an EXE that you wish to sign, you will use signtool.exe. The commands that you might want to remember are how to sign code, how to timestamp it, and how to verify the signature.

Signing code

For signing code, you will invoke the executable with the sign argument. You’ll need to indicate that the SHA256 algorithm is being used (/fd SHA256). Optionally, you might want to communicate to the tool that it can automatically select the most appropriate certificate for signing (/a).

signtool sign /fd SHA256 /a .\helloworld.exe

After the code is signed, you will want to timestamp the code. The timestamp command requires a URL to a time server.

signtool timestamp /tr http://timestamp.digicert.com /td SHA256 .\helloworld.exe
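
You can also check a signature from the command line with signtool’s verify command; the /pa switch selects the default Authenticode verification policy.

signtool verify /pa .\helloworld.exe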

If you want to view the information on a signed executable, within Windows Explorer the information shows when you right-click on it and select “Properties.” A “Digital Signatures” tab will be present and show the name of the signer and the timestamp.

Windows Properties window showing two additional detail panes with information on the certificates used to sign an exe.


Identifying the Bank that Issued a Card

In exploring aspects of Java Card programming, I’ve been using some of my older bank cards. A question that came to mind is: how does one identify the issuer of a card? I didn’t see that information explicitly encoded in the information that I could retrieve from the card, but I did find it. It is implicitly there; it can be inferred from the first few digits of the account number that the card returns. Generally, at least the first 6 digits are needed to identify the bank, though there are a few banks that need only 4 digits, and a few that need as many as 8. These prefixes are called Bank Identification Numbers (BINs).

There are a few reasons why someone might want to identify the card issuer. There have been promotions from agreements between vendors and banks where the vendor gives a discount or early access to those that purchase with a card from a specific bank. There are also some banks associated with higher rates of charge-backs; one might want to take additional precautions in those instances. Card issuers also tend to be associated with a specific country, so a card number could be used for establishing presence within some country.

Inferring the issuer is just a matter of finding the entry in a list of prefixes that matches the prefix of the account number in question. I’ve made two code examples available for doing this. One is written in C#, the other in C++. You can find them both on GitHub at this URL: https://github.com/j2inet/binLookup. For both code examples, I’ve included a list of prefixes and banks in embedded resources. This minimizes the number of files that must be copied as the executable is moved around. It also prevents a casual user from doing something that may be damaging to the data.
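
For illustration, here is a minimal C# sketch of that lookup, assuming rows of the form prefix,bankName in the CSV (the names here are mine, not the repository’s actual API). It prefers the longest matching prefix, since BIN lengths vary.

using System.IO;
using System.Linq;

static class BinLookup
{
    public static string FindIssuer(string accountNumber, string csvPath)
    {
        return File.ReadLines(csvPath)
            .Select(line => line.Split(','))
            .Where(parts => parts.Length >= 2 && accountNumber.StartsWith(parts[0]))
            .OrderByDescending(parts => parts[0].Length) // longest prefix wins
            .Select(parts => parts[1])
            .FirstOrDefault();
    }
}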

You can take a look at the list itself at https://github.com/j2inet/binLookup/blob/main/CS/BinLookup.net/resources/binlist.csv. I’d suggest searching in this document for the first few numbers of your own account number to see if it is in this list.

I’ve searched, but I’ve not found a unified, updated list of Bank Identification Numbers. The source from which I acquired these said that he found them through the Wayback Machine, which attributed the entries to an old, now-deleted Wikipedia article. This information is only in furtherance of a code example and is not to be relied upon for anything beyond that.



Compiling OpenSSL for JavaCard on Windows with Visual Studio

I needed to compile OpenSSL on Windows in preparation for some JavaCard work. While OpenSSL can be compiled with a range of compilers, I wanted to specifically use Visual Studio because that is the compiler that is generally available on my machines. In addition to Visual Studio, I also needed a build of perl installed and available in the system path. After cloning OpenSSL and navigating to the root of the repository, the next step is generally to configure the build. For the JavaCard tools, a 32-bit version of OpenSSL is needed. I ran into some problems initially with part of the build process targeting 64-bit architecture. To prevent this from happening, some environment variables can be set to ensure the 32-bit version of the tools is used. Visual Studio provides a batch file for setting these environment variables that we can use. Below is the path at which I found this batch file. For you, it may vary depending on the edition of Visual Studio that you have.

C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Auxiliary\Build\vcvars32.bat

Open a command terminal and run this batch file. Then you can start the build process. To configure the build process for 32-bit Windows with the options needed for the JavaCard environment, use the following command from the repository root.

perl Configure VC-WIN32 no-asm no-threads enable-weak-ssl-ciphers

If you wanted to make a general build, you could omit most everything after VC-WIN32. For a 64-bit build, use VC-WIN64A.

Now for the long part. After this next command, if you were planning on making coffee or having a quick bite to eat, a good time is about to present itself. From the root of the repository, run the following command.

nmake

If you come back and find that the build process has terminated with a complaint about mixing 32-bit and 64-bit code, then that means that the system is using the 64-bit version of the tools. This will happen if you forgot to run the batch file that I mentioned earlier. If you would like to run the unit tests for OpenSSL, use the following command.

nmake test

This process also takes a significant amount of time. When it completes, the last step is to install OpenSSL. This command will likely fail unless you open an instance of the Visual Studio command prompt with administrative privileges.

nmake install

This command will place OpenSSL within c:\Program Files\OpenSSL. The executables themselves are in c:\Program Files\OpenSSL\bin.



Unresolved external symbol WKPDID_D3DDebugObjectName (LNK2001)

I opened an old Direct3D program and tried recompiling it, only to get the error LNK2001 Unresolved external symbol WKPDID_D3DDebugObjectName. This is obviously an error from a missing definition. I checked the source code and saw that the object of interest was declared in d3dcommon.h. It was confusing at first, but I finally realized that for the symbol to be resolved by the linker, I needed to link dxguid.lib into the project. There are a few ways to link to a library; I prefer to link explicitly in source code instead of in the project settings. In one of my source files, I only needed to include the following.

#pragma comment(lib, "dxguid.lib")

I only need this file linked when I am compiling in debug mode. A conditional compilation statement wrapped around this will take care of making it conditionally linked.

#if defined(_DEBUG)
#pragma comment(lib, "dxguid.lib")
#endif

With that change, the program compiles and the error has gone away!

For those curious, the D3D program in question is something I have appended to the C++ Application Base Class project. One day I intend to make a D3D base class to go along with the D2D base class. The beginnings of my experimentation for it are within that project.



C++ Custom Deleters

Some organizations and entities (including the White House) have advised against using C/C++, recommending memory-safe languages instead. While I can understand the motivation for such encouragement, realistically, complete abandonment of the language isn’t practical. Managing low-level resources in other languages can be cumbersome and doesn’t necessarily insulate someone from resource leaks. There are not always higher-level libraries available for functionality that one wishes to use; a developer may have to build a library themselves and embrace management of those low-level resources. That said, when writing code in C++, one can use safer approaches. One approach is to use std::shared_ptr<T> instead of using pointers directly.

Shared pointers implement reference counting and will delete the underlying memory once the reference count reaches zero. This is a feature that is common in some other high-level languages. Instead of using the new and delete operators to allocate and release memory, one can use std::make_shared. For other blocks of data for which you might have manually allocated memory, you can use other standard template library classes, such as a std::vector instead of an array.
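
A small illustration of the idea, separate from the Windows example below:

#include <memory>
#include <vector>

struct Widget { int value = 0; };

int main()
{
	// Reference-counted allocation; freed when the last shared_ptr goes away.
	std::shared_ptr<Widget> w = std::make_shared<Widget>();
	w->value = 42;

	// A vector instead of a manually managed array; its storage is released
	// automatically when the vector goes out of scope.
	std::vector<char> buffer(1024, 0);
	return 0;
}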

Sometimes a resource in question was allocated by the operating system, and it is up to the developer to manage the release or deletion of the object. These can still be managed with std::shared_ptr<T> objects. Let’s take a look at a simple program that reads a file into a buffer.

#include <iostream>
#include <Windows.h>


const DWORD64 MAX_FILE_SIZE = 64 * 1024;//64 kilobytes


int main(int argc, char** argv)
{
	if (argc < 2)
	{
		std::wcout << L"Usage: ShowFileContents <filename>" << std::endl;
		return 1;
	}
	std::string filename = argv[1];
	std::wstring wfilename = std::wstring(filename.begin(), filename.end());
	HANDLE hFile = CreateFile(wfilename.c_str(), GENERIC_READ, FILE_SHARE_READ, NULL, OPEN_EXISTING, NULL, NULL);
	DWORD fileSizeHigh, fileSizeLow;
	DWORD64 fileSize =  -1;
	DWORD bytesRead = -1;

	fileSizeLow = GetFileSize(hFile, &fileSizeHigh);
	fileSize = ((DWORD64)fileSizeHigh << 32) + fileSizeLow;
	if (fileSize > MAX_FILE_SIZE)
	{
		std::wcout << L"File is too big to read" << std::endl;
		CloseHandle(hFile);
		return 1;
	}
	std::wcout << L"File size: " << fileSize << std::endl;
	char* buffer = new char[fileSize + 1];
	ZeroMemory(buffer, fileSize + 1);
	ReadFile(hFile, buffer, fileSize, &bytesRead, NULL);
	std::wcout << L"File contents: " << std::endl;
	std::wcout << buffer << std::endl;
	delete[] buffer; // array form of delete, since new[] was used
	CloseHandle(hFile);

	return 0;
}

The first thing I see that can be replaced is the use of new and delete. I’ll replace this buffer with a std::vector<T>. Since I am using a vector, I don’t need to explicitly allocate and deallocate memory; instead, I specify how much memory is needed in its declaration. When the std::vector falls out of scope, it will be deallocated automatically. I do make use of a pointer to the vector’s memory, which is accessible through the method std::vector<T>::data(). The ReadFile function needs a pointer to the memory in which it will deposit its data; that’s provided by way of this method.

There is also a HANDLE variable used for managing the file, named hFile. I’ve written on wrapping these in unique pointers before; you can read about that here. In that post, I implemented a functor that contains the definition for how the handle is to be deleted. Rather than manually ensuring I associate the functor with the smart pointer, I also made a function that handles that for me, so it is done the same way every time. This can also be used with a std::shared_ptr<T>, though you should generally only do this if you really need to share the resource with more than one object. On a unique pointer, the deleter is part of the object type; on a shared pointer, the deleter is not part of the type but is stored in the instance data for the pointer. I’ll replace my usage of CreateFile (the Win32 function) with a wrapper function that returns the handle as a std::shared_ptr.
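
The deleter functor isn’t shown in this post’s listings; a sketch consistent with the earlier post (the original’s details may differ) follows.

struct HANDLECloser
{
	void operator()(HANDLE handle) const
	{
		// Only close handles that refer to something real.
		if (handle != nullptr && handle != INVALID_HANDLE_VALUE)
		{
			CloseHandle(handle);
		}
	}
};

The wrapper function itself looks like this.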

using HANDLE_shared_ptr = std::shared_ptr<void>;

HANDLE_shared_ptr CreateFileHandle(
	std::wstring fileName, 
	DWORD dwDesiredAccess = GENERIC_READ, 
	DWORD dwShareMode = FILE_SHARE_READ, 
	LPSECURITY_ATTRIBUTES lpSecurityAttributes = NULL, 
	DWORD dwCreationDisposition = OPEN_EXISTING, 
	DWORD dwFlagsAndAttributes = 0, 
	HANDLE hTemplateFile = NULL)
{
	HANDLE handle = CreateFile(fileName.c_str(), dwDesiredAccess, dwShareMode, lpSecurityAttributes, dwCreationDisposition, dwFlagsAndAttributes, hTemplateFile);
	if (handle == INVALID_HANDLE_VALUE || handle == nullptr)
	{
		return nullptr;
	}
	return std::shared_ptr<void>(handle, HANDLECloser());
}

In the following, you can see the new implementation of my main() method. Notice that in the call to ReadFile, I call the std::shared_ptr<T>’s get() method to pass the HANDLE value to the function. I’m no longer explicitly invoking CloseHandle(); instead, when the main() method returns, the deleter will be invoked indirectly. If you set a breakpoint on it, you’ll see when this happens.

int main(int argc, char** argv)
{
	DWORD fileSizeHigh, fileSizeLow;
	DWORD64 fileSize = 0;
	DWORD bytesRead = 0;
	if (argc < 2)
	{
		std::wcout << L"Usage: ShowFileContents <filename>" << std::endl;
		return 1;
	}
	std::string filename = argv[1];
	std::wstring wfilename = std::wstring(filename.begin(), filename.end());

	auto fileHandle = CreateFileHandle(wfilename.c_str());
	if (!fileHandle)
	{
		std::wcout << L"Could not open file" << std::endl;
		return 1;
	}

	fileSizeLow = GetFileSize(fileHandle.get(), &fileSizeHigh);
	fileSize = ((DWORD64)fileSizeHigh << 32) + fileSizeLow;
	if (fileSize > MAX_FILE_SIZE)
	{
		std::wcout << L"File is too big to read" << std::endl;
		return 1;
	}
	std::wcout << L"File size: " << fileSize << std::endl;
	std::vector<char> buffer(fileSize + 1, 0);
	ReadFile(fileHandle.get(), buffer.data(), (DWORD)fileSize, &bytesRead, NULL);
	std::string bufferText = std::string(buffer.begin(), buffer.end());
	std::wcout << L"File contents: " << std::endl;
	std::cout << bufferText << std::endl;

	return 0;
}


You’ll see use of this soon in an upcoming post on smart cards. The code examples for it make Windows API calls to the Smart Card functions, and I’ll be making use of shared pointers with deleters to manage the resources in that project.



A Quick Introduction to Cosmos DB in C#

This is for those who need to get productive with Cosmos DB in a hurry. There’s a lot that could be discussed, but I think you’ll first want to set up a local development environment, write data to it, and read that data back. In this walkthrough, I’ll show how to make a connection to your local/development instance of Cosmos DB. Configuring a production connection is a little more involved; if you are just getting started, that isn’t an immediate concern, so I won’t cover it here. I’ve focused specifically on C# instead of targeting multiple languages to keep this shorter. Let’s get started. You first need to install the Cosmos DB emulator.

Installing the Cosmos DB Emulator

You can download the Cosmos DB Emulator from Microsoft at this address: https://aka.ms/cosmosdb-emulator. You can start the emulator from the command line. For ease of starting it, I would suggest adding the program’s path to your PATH environment variable. Once it is installed and the path updated, you can start the Cosmos DB Emulator with the following command.

Microsoft.Azure.Cosmos.Emulator.exe

By default, it will run on port 8081. If you would like to run it on a different port, use the /port=[port number] parameter. Once the emulator is running, you can view its contents at the URL https://localhost:8081/_explorer/index.html. This view shows you the information needed for connecting to the emulator instance. Note that the information shown here will be the same on every computer on which you run this; the emulator is only for testing and not for production. The emulator accepts communication over TLS. For this purpose, installing the emulator also installs a development certificate for encrypting the TLS traffic.
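For example, to run the emulator on port 8082 instead of the default, the command would look like this.

Microsoft.Azure.Cosmos.Emulator.exe /port=8082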

Create a new project and use the NuGet package manager to add a reference to Microsoft.Azure.Cosmos. If you are using the command line to manage your project, run the following command from your project directory.

dotnet add package Microsoft.Azure.Cosmos

Creating the Database and Container in C#

With that in place, we can get into the code. Start by adding a using statement for the library.

using Microsoft.Azure.Cosmos;

There are a few objects that we need to create: a client object for connecting to the database, the database object itself, and containers within the database. If you are familiar with traditional databases, containers are similar to tables. When we create the container, each value put in it will have an identifier in a field named id. We also identify a field on which the data will be logically grouped/partitioned. For my example, I’m using an employee database and am partitioning on the department to which someone is assigned. The partition key identifier is expressed in what looks like a file path. But instead of a path into nested folders, it is a path to a field in what could be nested objects. If the partition key is at the root of the object, then this path will look like a file path to a file that is in the root directory.
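The original code listing doesn’t show the Employee class itself, so here is a minimal sketch of what it might look like, consistent with the id field, the /dept partition key, and the properties queried later in this post. The JSON property names here are assumptions.

using Newtonsoft.Json;

public class Employee
{
    // Cosmos DB items must carry a lower-case "id" field.
    [JsonProperty("id")]
    public Guid ID { get; set; }

    [JsonProperty("name")]
    public string Name { get; set; }

    // The partition key field, matching the "/dept" partition key path.
    [JsonProperty("dept")]
    public string Department { get; set; }

    [JsonProperty("dateOfBirth")]
    public DateTime DateOfBirth { get; set; }
}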

For our local test, we will be using a resource key that is the default for any local instance of the Cosmos DB service. We would **not** be using this in a production environment. But for our local tests, it is fine.

const string RESOURCE_TOKEN = "C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==";
using CosmosClient client = new(accountEndpoint: "https://localhost:8081", authKeyOrResourceToken: RESOURCE_TOKEN);
Database database = await client.CreateDatabaseIfNotExistsAsync("employeeDatabase");
Container employeeContainer = await database.CreateContainerIfNotExistsAsync("employeeContainer", "/dept");

If we run the above code and then open our browser to the Cosmos DB explorer, we will find that there’s a new container named employeeContainer, though there is no data in it yet.

Adding an Item to the Container

We can add an item to the container with only one or two more statements of code. To put an object into the container, we create and initialize an object, then upsert it into the database.

var item = new Employee()
{
    ID = Guid.NewGuid(),
    Name = "Joel Johnson",
    Department = "IT"
};
await employeeContainer.UpsertItemAsync(item);

Now if we run the code and look in the Cosmos DB explorer, we will see our item. In addition to the fields from the public members of our object, there are some additional fields prefixed with an underscore (_) that hold metadata about our object, such as the timestamp (_ts) and etag (_etag).
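For illustration, an item read back from the emulator looks roughly like the following. The metadata values here are made up, and the exact set of underscore-prefixed fields may vary.

{
    "id": "7f1d2c3a-0000-0000-0000-000000000001",
    "name": "Joel Johnson",
    "dept": "IT",
    "_etag": "\"00000000-0000-0000-0000-000000000000\"",
    "_ts": 1700000000
}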

Reading an Item from the Container

If we wanted to retrieve a specific item from the container and we know its id value and partition key value, we can use the ReadItemAsync<T> method on the container to retrieve the item. This method will deserialize the contents and return our data as an object.

Guid idvalue = item.ID; // the id captured when the item was upserted

var readValue = await employeeContainer.ReadItemAsync<Employee>(
    id: idvalue.ToString(), 
    partitionKey: new PartitionKey("IT"));

We could also read the item as a stream. Reading the item this way will result in all the data associated with the item being read, including the fields that hold the additional metadata.

var itemStream = await employeeContainer.ReadItemStreamAsync(
                    id: idvalue.ToString(), 
                    partitionKey: new PartitionKey("IT")
);
using (StreamReader readItemStreamReader = new StreamReader(itemStream.Content))
{
    string content = await readItemStreamReader.ReadToEndAsync();
    Console.WriteLine(content);
}

Querying an Item

You probably won’t know the exact ID of the value(s) that you want to read, but you may know something about the items’ other data. Ironically, while Cosmos DB is a “NoSQL database,” it supports querying with SQL.

using  FeedIterator<Employee> feedIterator = employeeContainer.GetItemQueryIterator<Employee>(
                    queryText: "SELECT * FROM c WHERE c.dept = 'IT'");
while(feedIterator.HasMoreResults)
{
    FeedResponse<Employee> response = await feedIterator.ReadNextAsync();
    foreach(Employee employee in response)
    {
        Console.WriteLine($"Found item {employee}");
    }
}

You wouldn’t want your code to be vulnerable to SQL injection attacks. If a parameter could vary, you don’t want to construct a query string; you want to pass the value as parameterized input. In the SQL query, named parameters are prefixed with the @ symbol. In the above, if we wanted to pass the department as a parameter instead of embedding it in the query, we would use code like the following.

QueryDefinition query = new QueryDefinition("SELECT * FROM c WHERE c.dept = @dept")
    .WithParameter("@dept", "IT");
using FeedIterator<Employee> feedIterator = employeeContainer.GetItemQueryIterator<Employee>(query);

If you are familiar with LINQ, you can use that to query information as well. The container’s GetItemLinqQueryable<T>() method returns an object that you can use for LINQ queries.

var employeeLinqContainer = employeeContainer.GetItemLinqQueryable<Employee>(allowSynchronousQueryExecution: true );
var employeeQuery = employeeLinqContainer
    .Where(e => e.Department == "IT")
    .Where(e => e.DateOfBirth > new DateTime(1970, 01, 01))
    .OrderBy(e => e.Name);
            
foreach(var employee in employeeQuery)
{
    Console.WriteLine($"Found item {employee}");
}

I hope that this was enough to get you started!



Reading Magnetic Card Data

Magnetic cards have been around for a while; it was only in the past few years that point-of-sale systems in the USA primarily switched to NFC and chipped cards for payments. But magnetic cards still have lots of uses, such as gift cards or door entry cards. While cleaning up, I found a magnetic card reader that was used for a project a long time ago. I happened to find it at a time when I’m already writing about NFC cards, just wrote about the Luhn’s Check algorithm used in credit card entry, and am looking at how to read smart cards. I think it’s fitting to take a glance at reading information from magnetic cards. There are multiple standards for encoding information onto magnetic cards.

The physical unit I found was purchased for part of the functionality that was used in a demo for a concept in a past NRF Conference. I still have some video from the event.

This particular reader appears to the computer as an HID device. When a card is scanned, it generates keystrokes on the computer. For demonstration, I am using a gift card from “Raising Cane’s.” I’ve selected this card because I believe the card is cancelled and thus has no funds on it. But even if it does have funds and someone uses it, I suffer no loss, since I am not positioned to suffer one. I also found an old American Express gift card whose funds have long been exhausted.

Encoding

The data on track 1 of a card is encoded in 7 bits; 6 bits are data and 1 bit is for parity. This means that only 64 possible characters can be in the encoding. This isn’t UTF-8 or ASCII like you may be accustomed to, though the reader does translate from this encoding to the equivalent keystrokes. Data on tracks 2 and 3 (if they exist) have 5 bits per character (4 data bits plus parity). Track 1 can have up to 79 characters, track 2 up to 40, and track 3 up to 107 characters.

I’ve got tables at the end of this post that show the actual encodings and their classifications. The reader translates these to ASCII for you; the tables are primarily included as a curiosity. There is more than one standard for encoding data on magnetic cards. Some of the hotel key cards that I have don’t work with this reader. This is somewhat expected, as a hotel might want to use a format that isn’t as easy to replicate. The various gift cards, membership cards, and payment cards I tried generally worked fine.
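As a quick illustration of how those encodings relate to ASCII (this observation is derived from the tables at the end of this post, not from the original reader documentation): the track 1 alpha characters are the 6-bit value plus 0x20, and the track 2/3 BCD characters are the 4-bit value plus 0x30. A small sketch in C#:

// Map raw track values to ASCII characters.
// Alpha (track 1): 6-bit value + 0x20. BCD (tracks 2/3): 4-bit value + 0x30.
static char AlphaToChar(int sixBitValue) => (char)(sixBitValue + 0x20);
static char BcdToChar(int fourBitValue) => (char)(fourBitValue + 0x30);

Console.WriteLine(AlphaToChar(0x25)); // 'E' (DATA in the alpha table)
Console.WriteLine(BcdToChar(0x0D));   // '=' (the BCD field separator)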

Error Response

I want to start off talking about the error response, since people often overlook those. If you use a card that the reader cannot process, it returns the string +E?. If you receive this response, the read failed, possibly because the card uses an encoding that the reader doesn’t understand.

Sample Card Scan

Here’s the data that comes back when I read the “Raising Cane’s” gift card. I will refer to it in the following sections.

%B6000205500145033524^GIFT/RAISINGCANES^4211?;6000205500145033524=4211101685485?

Start and End Sentinel

When a card is read, the first character in the data will be a %. The last character will be a ?. If you were making something that reads card data, you could use these characters to let your program know when it is receiving card data and when that data ends. Fields on the card are separated with the ^ character. Though not part of the card data, my specific card reader also sends an enter keystroke when it has finished reading. All of the data between the % and ? comes from the card, but there are some more delimiters within that block of data.

Track Delimiters and Field Delimiters

A magnetic card can have multiple tracks in parallel. The card reader returns all three tracks in a single stream of data, but the tracks are delimited with a semicolon (;). A single track can have multiple fields of data. Fields are separated with the caret (^) character on track 1 or an equals (=) character on tracks 2 and 3. Parsing out the sample card read I provided above, we end up with the following (a short parsing sketch follows the list). The exact purpose of each of these fields can vary by card.

Track 1 Field 1: B6000205500145033524
Track 1 Field 2: GIFT/RAISINGCANES
Track 1 Field 3: 4211?
Track 2 Field 1: 6000205500145033524
Track 2 Field 2: 4211101685485
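Here is a minimal parsing sketch, not from the original post, that splits a swipe into tracks and fields using the delimiters described above. It also checks for the +E? error response mentioned earlier, and trims each track’s own end sentinel before splitting fields.

string swipe = "%B6000205500145033524^GIFT/RAISINGCANES^4211?;6000205500145033524=4211101685485?";

// The reader emits +E? when it cannot decode a card.
if (swipe == "+E?")
{
    Console.WriteLine("Card could not be read.");
    return;
}

// Strip the start sentinel (%) and the final end sentinel (?).
string payload = swipe.TrimStart('%').TrimEnd('?');

// Tracks are delimited with semicolons.
string[] tracks = payload.Split(';');
for (int t = 0; t < tracks.Length; t++)
{
    // Track 1 uses ^ as the field separator; tracks 2 and 3 use =.
    char separator = (t == 0) ? '^' : '=';
    string[] fields = tracks[t].TrimEnd('?').Split(separator);
    for (int f = 0; f < fields.Length; f++)
    {
        Console.WriteLine($"Track {t + 1} Field {f + 1}: {fields[f]}");
    }
}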

Card Class

The very first character read from the card indicates the class/type of card being read. For all of the payment cards (both gift cards and credit cards) that I’ve encountered, this character is a ‘B’. From my reading, and from scanning other cards that I have, I have found the following.

Prefix	Association
B	Payment or Gift Card
G	Gift Card
M	Membership Card
GC	Gift Card

Card class based on the first character

Credit Card Format

While I found the way data is structured on a card to vary, the construction of the data was consistent across the several credit cards I tried. Here is a modified data stream from a gift card.

%B377936453080000^THANK YOU                 ^2806521190729520                ?;377936453080000=280652119072952000000?

Breaking it out into fields, we have the following.

Field	Purpose	Data
Track 1 Field 1	Primary Account Number	377936453080000
Track 1 Field 2	Name	THANK YOU
Track 1 Field 3	Expiration Date (2028-06), Service Code (521), Discretionary Data (190729520)	2806521190729520
Track 2 Field 1	Credit Card Number	377936453080000
Track 2 Field 2	Expiration Date (2028-06), Service Code (521), Discretionary Data (190729520)	280652119072952000000
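A short sketch, not from the original post, of pulling those parts out of track 2’s second field, assuming the layout in the table above (four digits of YYMM expiration, a three-digit service code, then discretionary data):

string track2Field2 = "280652119072952000000";

string expiration = track2Field2.Substring(0, 4);    // "2806" -> 2028-06
string serviceCode = track2Field2.Substring(4, 3);   // "521"
string discretionary = track2Field2.Substring(7);    // issuer-specific data

Console.WriteLine($"Expires: 20{expiration.Substring(0, 2)}-{expiration.Substring(2)}");
Console.WriteLine($"Service code: {serviceCode}");
Console.WriteLine($"Discretionary data: {discretionary}");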

You’ll notice that some of the data is on the card twice. I’m not quite sure of the reason for this. Is that for verifying the integrity of the data? To provide an alternative method of reading data for cheaper devices, allowing them to only read one track? This, I don’t know.

Reading the Data in Code

As I said in the post on Luhn’s Check, don’t enter a real credit card number on the site where I have sample code posted. Though the site doesn’t actually communicate any data back to any server, I think it is still better to advise you not to enter real data. But the site is there for you to examine the code. Feel free to download it to run in a local sandbox (where you can prohibit Internet communication) or view the source code to see how it works. You can find the code at https://j2inet.github.io/apps/magreader. If you would like to use a magnetic card reader that is similar to what I have, you can find it here (affiliate link):

Encoding Tables

BCD Data Format

Character	Hex	Function
0	0x00	DATA
1	0x01	DATA
2	0x02	DATA
3	0x03	DATA
4	0x04	DATA
5	0x05	DATA
6	0x06	DATA
7	0x07	DATA
8	0x08	DATA
9	0x09	DATA
:	0x0A	Control
;	0x0B	Start Sentinel
<	0x0C	Control
=	0x0D	Field Separator
>	0x0E	Control
?	0x0F	End Sentinel

Alpha Encoded Data

Character	Hex	Function
[space]	0x00	Special
!	0x01	Special
"	0x02	Special
#	0x03	Special
$	0x04	Special
%	0x05	Start Sentinel
&	0x06	Special
'	0x07	Special
(	0x08	Special
)	0x09	Special
*	0x0A	Special
+	0x0B	Special
,	0x0C	Special
-	0x0D	Special
.	0x0E	Special
/	0x0F	Special
0	0x10	DATA
1	0x11	DATA
2	0x12	DATA
3	0x13	DATA
4	0x14	DATA
5	0x15	DATA
6	0x16	DATA
7	0x17	DATA
8	0x18	DATA
9	0x19	DATA
:	0x1A	Special
;	0x1B	Special
<	0x1C	Special
=	0x1D	Special
>	0x1E	Special
?	0x1F	End Sentinel
@	0x20	Special
A	0x21	DATA
B	0x22	DATA
C	0x23	DATA
D	0x24	DATA
E	0x25	DATA
F	0x26	DATA
G	0x27	DATA
H	0x28	DATA
I	0x29	DATA
J	0x2A	DATA
K	0x2B	DATA
L	0x2C	DATA
M	0x2D	DATA
N	0x2E	DATA
O	0x2F	DATA
P	0x30	DATA
Q	0x31	DATA
R	0x32	DATA
S	0x33	DATA
T	0x34	DATA
U	0x35	DATA
V	0x36	DATA
W	0x37	DATA
X	0x38	DATA
Y	0x39	DATA
Z	0x3A	DATA
[	0x3B	Special
\	0x3C	Special
]	0x3D	Special
^	0x3E	Field Separator
_	0x3F	Special