Plant Timelapse

Photography is among my interests, and I decided to experiment with time lapse photography using plants. To create a time lapse video, you need a camera that can take photos at timed intervals and software to assemble those photographs into a video. There are many solutions for doing this, and over time I will try out several different software programs, cameras, setups, and subjects. For this first attempt, I used a GoPro HERO5. It is an older model; the most recent version available right now is the HERO10. All of these models have time lapse photography settings built into the device. You choose your camera settings, select the time interval between photographs, aim the camera at your subject, and let it run. With these cameras you can also specify that you want a video, and the camera will assemble the photos into a video for you.

When doing a time lapse shot, you want to leave the setup undisturbed. But you will also want to know how things look so that you can make corrections. To this end, I let the GoPro run for a few hours and stopped it to look at the results. When I did this, I found that my original setting of taking a photo once a second was too frequent; it would fill up the memory card that I was using too fast. I also found that I didn’t like my original angle. I made adjustments, let another test run for an hour, and was content. I set things up and let them run. The results were okay overall, but there was still plenty of room for improvement. The first item of improvement was the lighting. While I liked the look aesthetically, the light wasn’t sufficient for the plant. In my timelapse you will see the plants grow up long and skinny. This is what plants do while underground with little light exposure: they grow upward until they get sufficient light, and then transition from growing up to growing out. Because of the insufficient lighting, these plants used a lot of their resources trying to grow up to get more light.
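
A little arithmetic makes the problem with the one-second interval obvious. This is just a planning sketch; the 4 MB average photo size is an assumed figure for illustration, not a measurement from my camera.

SECONDS_PER_DAY = 24 * 60 * 60
PHOTO_MB = 4        # assumed average photo size in megabytes
PLAYBACK_FPS = 30   # playback speed of the finished video

for interval in (1, 10):
    photos_per_day = SECONDS_PER_DAY // interval
    storage_gb = photos_per_day * PHOTO_MB / 1024
    video_minutes = photos_per_day / PLAYBACK_FPS / 60
    print(f"{interval}s interval: {photos_per_day} photos/day, "
          f"~{storage_gb:.0f} GB/day, ~{video_minutes:.0f} min of video/day")

At one photo per second, a day of shooting is well over 300 GB and nearly an hour of finished footage per day; at ten seconds, both figures drop by a factor of ten.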

Towards the end of my timelapse, I pulled out one of my DSLRs. (I feel that DSLRs are ancient given that the major camera manufacturers have transitioned to mirrorless. But it still works, and I keep using it.) I have an intervalometer for my camera; this is a timing device that can be used to trigger the camera. I set it up for 10-second intervals, just like the GoPro, and let it run during the last day of the 10 days that it took for me to get my time lapse shots. The results were much better. Comparing the two, the DSLR will be my go-to device for time lapse shots. That’s not to say the GoPro is out. The GoPro is much more tolerant of various conditions, especially outdoor conditions. I’ll be using it for some outdoor time lapse shots fairly soon, though the results will be far off in the future.

One of the issues here is that the lighting conditions that give the photo the look that I want might not be the conditions under which the plant can thrive. I started to imagine solutions, and I thought one that may work is having a light that turns off or changes brightness in sequence with the photos. Full lighting conditions would be applied most of the time, but from just before to just after the shot is taken, the dimmer lighting conditions could be used. I’ve got a DMX controller and thought about using it, but that could be overkill. I thought about using a relay controlling a power source. But after a lot more thinking, I realized I already have a solution: my Hue lighting. The Philips Hue lights are controllable via REST calls. I could have a Pi dedicated to controlling the lights of interest.
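
As a minimal sketch of that idea: a Hue bridge exposes each light’s state through a simple PUT request, so only the Python standard library is needed. The bridge address, API username, and light number below are placeholders you would replace with your own.

import json
import urllib.request

BRIDGE = "192.168.1.2"          # placeholder: your Hue bridge's IP address
USERNAME = "YOUR_API_USERNAME"  # placeholder: an API key created on the bridge
LIGHT_ID = 1                    # placeholder: the number of the light to control

def set_brightness(bri, on=True):
    # bri ranges from 1 (dimmest) to 254 (full brightness)
    url = f"http://{BRIDGE}/api/{USERNAME}/lights/{LIGHT_ID}/state"
    body = json.dumps({"on": on, "bri": bri}).encode()
    request = urllib.request.Request(url, data=body, method="PUT")
    with urllib.request.urlopen(request) as response:
        return json.load(response)

set_brightness(50)   # dim for the photograph
set_brightness(254)  # back to full light for the plant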

The light switching must be coordinated with the camera. My intervalometer would not work for this: while I could probably get a working time sequence up front, over the course of days the intervalometer and the light sequencing could drift out of sync with each other. I need to have the Pi control the camera too. I’ve written before on controlling Hue lighting from the Pi, and I think that could be used here. As soon as I get free time from work and other obligations, I’ll be looking into controlling the digital camera from a Pi. Some of the libraries that I’ve looked at appear to be capable of controlling both traditional DSLRs and the more modern mirrorless cameras.
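
The coordination loop itself could stay simple. This sketch assumes the set_brightness function from the earlier snippet, and assumes the camera is triggered through the gphoto2 command-line tool, one candidate that supports both DSLRs and mirrorless cameras; the interval and brightness values are placeholders to tune.

import subprocess
import time

INTERVAL_SECONDS = 10  # placeholder: time between frames

while True:
    cycle_start = time.monotonic()
    set_brightness(50)    # dim just before the shot
    subprocess.run(["gphoto2", "--capture-image-and-download"], check=True)
    set_brightness(254)   # restore full light for the plant
    # Sleep out the remainder of the interval so the schedule never drifts.
    elapsed = time.monotonic() - cycle_start
    time.sleep(max(0, INTERVAL_SECONDS - elapsed))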

I’ve gotten some seeds for corn, okra, and peppers planted now. Once they sprout, I’ll start my next time lapse with a more advanced setup.


reTerminal: Industrial CM4 Case with a Screen

For me, the options for adding a screen to a Raspberry Pi have always come with a bit of dissatisfaction. This isn’t because of any intrinsic flaws in the designs; the Pi has its own thickness, which has contributed to solutions with form factors that are not quite my preference. This started to change with the release of the Raspberry Pi Compute Modules. With the Raspberry Pi Compute Module 4, I see some satisfying solutions. One of them has a plan available for a 3D-printable case. Another comes already encased. I chose a solution that already has a case because I don’t have a 3D printer and I’ve had mixed results in using third-party printers. The solution that I selected is the Seeed Studio reTerminal.

Video covering the Seeed Studio reTerminal

Before saying more about it, I want to point out that this case does not have a battery. If you are seeking a solution with a battery, then you may want to consider the solution with the 3D print designs and alter it to hold a battery.

The unit is sold with a Raspberry Pi Compute Module 4 (CM4) included. Right now, the unit is sold with the CM4 that has Wi-Fi, 4 gigs of RAM, and 32 gigs of eMMC. This is great, as it is near impossible to get a CM4 by itself these days. The packaging for the unit uses more flexible wording, saying “Up to 8GB RAM/Up to 32GB EMMC,” suggesting that at some point they may sell the unit with other variants. The only indication of which CM4 module is in the box is a sticker with the barcode that spells out the CM4 version (CM4104032).

The display on the unit itself is 720×1280 pixels. It sounds like I said those dimensions in reverse. I haven’t. Going by the direction in which the pixels refresh, the first line of the scan is at the left of the screen, and the scan works its way to the right. This differs from conventional displays, which start at the top and work their way down. Accessible through the case are gigabit Ethernet, two USB 2.0 ports, the Pi 40-pin header, and an industrial high-speed expansion interface. This unit was designed with industrial applications in mind, though I won’t be paying attention to this industrial interface. The case also contains a real-time clock, a cryptographic coprocessor, a few hardware buttons (including a button to power the unit on), an accelerometer, and a light sensor. Out of the box, the software needed for this additional hardware is preinstalled. Should you choose to reinstall the operating system yourself, you will need to install the software and drivers for the additional hardware manually.

Component Layout Diagram from Seeed Studio

The packaging for the unit contains extra screws, a screwdriver, and the reTerminal unit itself. On the lower side of the unit is a socket for a 1/4-inch screw; this is the same size screw used by many camera tripods. I’m using one of the mini desktop tripods for my unit. To power the unit on, all that is needed is to connect power to the USB-C connector on the left side of the reTerminal.

The unit does not ship with an on-screen keyboard installed. For initial setup, you will want to have at minimum a USB-C power supply and a keyboard. If you do not have a mouse, the touch screen works just fine.

reTerminal Specific Hardware

I’ve mentioned a number of hardware items contained within the reTerminal, such as the custom buttons. Accessing the additional hardware and interfaces is easier than I expected. The four buttons on the front are mapped to the keyboard keys A, S, D, and F. If you would like to map them to different keys, they can be set through /boot/config.txt. Within that file is a line that looks similar to the following.

dtoverlay=reTerminal,key0=0x041,key1=0x042,key2=0x043,key3=0x044

The hex numbers are ASCII codes for the characters these keys will generate. You can change these as needed.

LEDs and Buzzer

There are four positions for LEDs below the screen of the unit. Two of those positions have LEDs that are controllable through software. The positions are labeled STA and USR. USR has LED 0 (green). Position STA has LEDs 1 (red) and 2 (green). Because of the two LEDs behind position STA, the perceived color of that position can range from green to yellow to red. Control of the LEDs is available through the file system. In the directory /sys/class/leds are the subdirectories usr_led0, usr_led1, and usr_led2. Writing a text string with a number in the range of 0 to 255 to a file named brightness will set the brightness of the LED, with 0 being off and 255 being full brightness. Note that root access is needed for this to work.

According to the documentation, the brightness of the LEDs is changed with this number. But in practice, each LED appears to be binary; I don’t see any difference in brightness between a value of 1 and a value of 255.

The buzzer is treated like the LEDs, but only has a “brightness” level ranging from 0 (off) to 1 (on). The device directory for the buzzer is /sys/class/leds/usr_buzzer. As with the LEDs, write to a file named brightness.
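
As a quick illustration of the sysfs interface just described, this sketch steps the STA position through its colors and chirps the buzzer. Run it as root, since the brightness files are root-owned by default.

import time

def write_value(path, value):
    # The sysfs files accept the value as a text string.
    with open(path, "w") as f:
        f.write(str(value))

# Step the STA position through red, yellow (red + green), green, and off.
for led, level in (("usr_led1", 255), ("usr_led2", 255),
                   ("usr_led1", 0), ("usr_led2", 0)):
    write_value(f"/sys/class/leds/{led}/brightness", level)
    time.sleep(0.5)

# The buzzer has only two levels: 1 is on, 0 is off.
write_value("/sys/class/leds/usr_buzzer/brightness", 1)
time.sleep(0.2)
write_value("/sys/class/leds/usr_buzzer/brightness", 0)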

Real Time Clock

The real-time clock is connected to the I2C interface of the CM4. The command-line utility hwclock works with the clock; for example, sudo hwclock -r reads the time from it.

Light Sensor

The light sensor is exposed through the directory /sys/bus/iio/devices/iio:device0. Reading the in_illuminance_input file within that directory gives the brightness value.

Accelerometer

The accelerometer in the unit is an ST Microelectronics LIS3DHTR. This hardware can be used to automatically change the screen orientation, or for other applications. To see it in action, you can use the evtest tool that Seeed Studio preinstalled on the device. Running evtest and selecting the index for the accelerometer hardware will result in it displaying the readings for each axis.
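
If you would rather consume the readings in your own code than in evtest’s output, the python-evdev package (pip install evdev) is one way to do it. This is a sketch; the /dev/input/event4 path is taken from my evtest listing further below and will vary from unit to unit.

from evdev import InputDevice, ecodes  # assumes: pip install evdev

# Placeholder path; check evtest's device list for your unit's event number.
device = InputDevice("/dev/input/event4")
print(f"Reading from {device.name}")

AXES = {ecodes.ABS_X: "X", ecodes.ABS_Y: "Y", ecodes.ABS_Z: "Z"}

for event in device.read_loop():
    if event.type == ecodes.EV_ABS and event.code in AXES:
        print(f"{AXES[event.code]} = {event.value}")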

My Setup

As per my usual, after I had the Pi up and running there were a few other changes that I wanted to apply.

Testing the Hardware

For testing much of the above-mentioned hardware, root access is needed. I would prefer to avoid using root access, so I first tried to grant the user pi permission on the needed files. Ultimately, this doesn’t work as planned. The file paths are sysfs paths, part of a virtual file system used for accessing hardware; it gets recreated on each reboot, so changes made to it do not persist. But if you want to grant permissions that last until the next reboot, you can use the following. Otherwise, you’ll need to run the applications that use this additional hardware as root.

#enter interactive root session
sudo -i
#navigate to the folder for LEDs and the buzzer
cd /sys/class/leds/
#grant the pi user permission on the brightness files
chown pi usr_led0/brightness
chown pi usr_led1/brightness
chown pi usr_led2/brightness
chown pi usr_buzzer/brightness

#grant permission to the light sensor
chown pi /sys/bus/iio/devices/iio:device0

#exit the root session
exit

Some of the hardware uses the SPI and I2C interfaces. Using the Raspberry Pi Config tool, make sure that these interfaces are enabled.

Install the tool for input event viewing. The tool is named evtest.

sudo apt-get install evtest -y

Once installed, run evtest. Note that this tool still works even if you are entering commands over SSH. The tool will list the input devices and prompt you to select one.

 $ evtest
No device specified, trying to scan all of /dev/input/event*
Not running as root, no devices may be available.
Available devices:
/dev/input/event0:      Logitech K400
/dev/input/event1:      Logitech K400 Plus
/dev/input/event2:      Logitech M570
/dev/input/event3:      gpio_keys
/dev/input/event4:      ST LIS3LV02DL Accelerometer
/dev/input/event5:      seeed-tp
/dev/input/event6:      Logitech K750
/dev/input/event7:      vc4
/dev/input/event8:      vc4
Select the device event number [0-8]: 3

The actual order and presence of your options may vary. In my case, you can see the devices associated with a Logitech Unifying receiver that is connected to the device. The hardware buttons on the unit are represented by the device gpio_keys. For me, this is option 3. After selecting 3, as I press or release any of these buttons, the events print in the output. Remember that by default these buttons are mapped to the keys A, S, D, and F. This is reflected in the output.

Input driver version is 1.0.1
Input device ID: bus 0x19 vendor 0x1 product 0x1 version 0x100
Input device name: "gpio_keys"
Supported events:
  Event type 0 (EV_SYN)
  Event type 1 (EV_KEY)
    Event code 30 (KEY_A)
    Event code 31 (KEY_S)
    Event code 32 (KEY_D)
    Event code 33 (KEY_F)
    Event code 142 (KEY_SLEEP)
Properties:
Testing ... (interrupt to exit)
Event: time 1651703749.722810, type 1 (EV_KEY), code 30 (KEY_A), value 1
Event: time 1651703749.722810, -------------- SYN_REPORT ------------
Event: time 1651703750.122811, type 1 (EV_KEY), code 30 (KEY_A), value 0
Event: time 1651703750.122811, -------------- SYN_REPORT ------------
Event: time 1651703750.832809, type 1 (EV_KEY), code 31 (KEY_S), value 1
Event: time 1651703750.832809, -------------- SYN_REPORT ------------
Event: time 1651703751.402797, type 1 (EV_KEY), code 31 (KEY_S), value 0
Event: time 1651703751.402797, -------------- SYN_REPORT ------------
Event: time 1651703751.962817, type 1 (EV_KEY), code 32 (KEY_D), value 1
Event: time 1651703751.962817, -------------- SYN_REPORT ------------
Event: time 1651703752.402812, type 1 (EV_KEY), code 32 (KEY_D), value 0
Event: time 1651703752.402812, -------------- SYN_REPORT ------------
Event: time 1651703753.132807, type 1 (EV_KEY), code 33 (KEY_F), value 1
Event: time 1651703753.132807, -------------- SYN_REPORT ------------
Event: time 1651703753.552818, type 1 (EV_KEY), code 33 (KEY_F), value 0
Event: time 1651703753.552818, -------------- SYN_REPORT ------------

Since we are speaking of evtest, exit it with CTRL-C and run it again. This time, select the accelerometer. A stream of accelerometer values will fly by. These are hard to visually track in the console, but if you reorient the device and manage to follow one of the readings, you will see it change accordingly.

Event: time 1651705644.013288, -------------- SYN_REPORT ------------
Event: time 1651705644.073140, type 3 (EV_ABS), code 1 (ABS_Y), value -18
Event: time 1651705644.073140, type 3 (EV_ABS), code 2 (ABS_Z), value -432
Event: time 1651705644.073140, -------------- SYN_REPORT ------------
Event: time 1651705644.133259, type 3 (EV_ABS), code 1 (ABS_Y), value 18
Event: time 1651705644.133259, type 3 (EV_ABS), code 2 (ABS_Z), value -423
Event: time 1651705644.133259, -------------- SYN_REPORT ------------
Event: time 1651705644.193161, type 3 (EV_ABS), code 0 (ABS_X), value 1062
Event: time 1651705644.193161, type 3 (EV_ABS), code 1 (ABS_Y), value 0
Event: time 1651705644.193161, type 3 (EV_ABS), code 2 (ABS_Z), value -409
Event: time 1651705644.193161, -------------- SYN_REPORT ------------
Event: time 1651705644.253290, type 3 (EV_ABS), code 0 (ABS_X), value 1098
Event: time 1651705644.253290, type 3 (EV_ABS), code 2 (ABS_Z), value -405
Event: time 1651705644.253290, -------------- SYN_REPORT ------------

Light Sensor

Getting a value from the light sensor is as simple as reading a file. From the terminal, you can read the contents of a file to get the luminance value.

cat /sys/bus/iio/devices/iio:device0/in_illuminance_input
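
The same read works from code. The sketch below polls the sensor once a second, assuming the iio:device0 path shown above; the device number could differ if other IIO hardware is attached.

import time

SENSOR = "/sys/bus/iio/devices/iio:device0/in_illuminance_input"

while True:
    with open(SENSOR) as f:
        print(f"Illuminance: {f.read().strip()}")
    time.sleep(1)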

HDMI and Screen Orientation

Earlier, I described the screen as having a resolution of 720×1280 rather than the conventional-sounding 1280×720. That ordering was deliberate. Typically, screens refresh from top to bottom; this screen refreshes from left to right. You can check this for yourself by grabbing a window and moving it around rapidly. Some screen tearing will occur, exposing the way in which the screen renders. If you make a fresh install of Raspbian or Ubuntu, this will be more apparent, because the screen will be oriented such that the left edge is the top of the screen and the right edge is the bottom. If you would like to manage the orientation of this display and of external displays connected to the reTerminal, there is a screen orientation utility you can install for managing the layout.

sudo apt-get install arandr -y

Future Expansion

I don’t give much weight to plans for future products in general, since there is no guarantee that they will materialize, but I’ll reference what Seeed Studio has published. The “Industrial High-Speed Interface” connects to a number of interfaces on the CM4, including PCIe, USB 3.0, and SDIO 3.0. Seeed Studio says that it plans to make modules available for connecting to this interface, such as a camera module, a speaker and mic array, PoE, and 5G/4G modems.



Remote Desktop on the Pi

While I enjoy being productive on my Pi over SSH, there are times when I need to access the desktop environment. Rather than be bound to the display to which the Pi is connected (if it is connected to one at all; some of my Pis have no display), I decided to set up Remote Desktop on the Pi. Most of the computers that I use are Windows machines and already have a remote desktop client. (Note: another option is VNC.) I did this for my Jetsons as well. While the same instructions often work on both the Jetson and the Pi, this is not one of the situations where that was the case. I have another entry coming on how to perform the necessary steps on the Jetson.

On the Pi, there are only a few steps needed. Like many installations, start with updating your packages. Open a terminal and enter the following commands.

sudo apt-get update
sudo apt-get upgrade

This could take a while to run, depending on how long it has been since you last ran it and how many packages there are to install. After it completes, use the following commands to install a remote desktop server on your Pi.

sudo apt-get install xrdp -y
sudo service xrdp restart

Once the installation is done, you need to get your Pi’s IP address.

ifconfig

You should see the addresses for your Pi’s network adapters listed. There will be several. My Pi is connected via Ethernet, so I need the address from the adapter eth0.

Response from ifconfig.

Once you have that information, you are ready to connect. Open the Remote Desktop client on your computer and enter your Pi’s IP address as the identifier for the target machine. Once you connect, you will be greeted with a second login screen that asks for information about the session you wish to start.

PI RDP Login

Leave the Session setting at its default of Xorg. Enter the user ID and password for your Pi. A few moments later, you will see the Pi’s desktop. Note that while many remote desktop clients default to using the resolution of your local computer’s display, you also have the option of setting a resolution manually. You may want to do this if you are on a slower network connection, or if you just do not want your remote session to cover all of your local desktop.

Remote Desktop Client Resolution Settings



Sun Gazing Equipment

Today was a nice day. The weather was sunny but not hot, and the sky was fairly clear. I already had my telescope in my car for plans that were not starting until after sunset, but I decided to do a bit of sun gazing while the sun was up. “Sun gazing” is a term that might raise a bit of concern, since looking at the sun directly can be damaging to one’s vision. Don’t worry, I wasn’t doing that; I was using proper equipment. I grabbed some video clips from my gazing and shared them on my YouTube and Instagram accounts. This post gives further information about that video.

I have a solar filter, acquired for the 2017 eclipse, that covers my telescope’s opening. These filters block more than 99.9% of sunlight; a hole even as small as a pinhead would render the filter unusable by letting too much light in. Without the filter, simply pointing the telescope at the sun could be damaging: heat could build up inside the telescope, and whatever is on the viewing end of the telescope would suffer serious burns with exposure of only a moment.

I have a couple of telescopes at my disposal, but the telescope on the motorized mount is generally preferred for a couple of reasons. One is that it automatically points at the planet, star, or nebula that I select from a menu on the hand controller (after some calibration). Another is that it automatically adjusts in response to the earth’s rotation. This last item might not sound significant, but it is! With my manual telescope, once I’ve found a heavenly body, the body is constantly rotating out of view. With proper alignment the body can be tracked by turning a single knob, but it can be a bit annoying to look away for a moment only to return and have to hunt down the body of interest. The downside of the motorized mount is the weight and the need for electricity. My full motorized telescope setup is over 100 pounds. At home this isn’t a problem, as I can carry the fully assembled setup in and out of my home and connect it to my house’s power. For use in other locations, I must either bring power with me or have my car nearby to provide electricity.

CGEM II 800 Edge HD

My telescope is a much older unit. It is a Celestron CGEM 800. This specific model is no longer sold, since it has been replaced with newer models. With the CGEM 800, there were additional accessories I purchased to add functionality that comes built into some other models. I added GPS to my telescope, which enables it to get the time, date, and the telescope’s location (all information necessary for the telescope to automatically aim at other bodies). I’ve also added WiFi to my telescope. With WiFi, I can control the scope from an app on a mobile device. For some scenarios, this is preferred to scrolling through menus on the two-line, text-only display of the scope’s hand controller.

While one won’t be viewing any sunspots with them, I also keep a set of eclipse glasses with my setup. I use these when aligning the telescope with the sun. While they are great for looking at the sun, you won’t be able to see anything else through them🙂. If you want to see more detail, you need a telescope that filters out specific wavelengths of light. The Meade SolarMax series is great for this. But those telescopes are also expensive and only useful for viewing the sun.

Meade Solarmax II
Picture taken from Meade SolarMax (source)

These telescopes cost about 1,800 USD.

At this time of the year, from where I live, only a couple of bodies from the solar system are visible: the sun and the moon. If I were to use the telescope at 5 AM, I might be able to catch a glimpse of another planet just before the sun begins to wash out the quality of the image; that is not something I’m interested in doing. I’ll take the telescope back out later in the year when there is an opportunity to see more.

On another YouTube channel someone mentioned they thought it would be cool if it were possible to control a telescope with a Raspberry Pi. Well, it’s possible. I might try it out. I’ve controlled my telescope from my own software before, and may try doing it again. Later in the year when the other planets are visible, it might be a great solution for controlling the telescope and a camera to get some automated photographs.

NVIDIA Edge Computing Introduction May 12

NVIDIA is holding a session introducing Edge Computing. The introduction is said to cover fundamentals, how to integrate edge computing into your infrastructure, which applications are best deployed to the edge, and time for Q&A. The conference is at no cost. If you’d like to register for the conference, use this link.




Booting a Pi CM4 on NVME

I go through a lot more SD cards than the typical person. I’m usually putting these cards in single board computers like the Jetson Nano or the Raspberry Pi and using them there. I have a lot of these devices. These cards only occasionally fail, but with a lot of devices, “occasionally” is frequent enough for my rate of card consumption to be higher than a typical consumer’s. The easy solution is to just not use SD cards. At this point, the Pi can boot off USB drives. I’ve generally resisted this for reasons of æsthetics; I just don’t like the U-shaped USB connector (feel free to tell me how silly that is in the comments section).

Enter the Raspberry Pi CM4. These modules have a PCIe interface, and you can select the board that has the hardware that you need. One of those boards is the WaveShare CM4-IO-Base. Among other hardware, this board has a PCIe M.2-keyed slot. There are two versions of this board, version A and version B. The main difference is that model B has a real-time clock while model A does not. Otherwise, these boards can be treated as identical.

The CM4 IO-BASE-B that I am using sandwiched between acrylic cutouts.

The CM4-IO-BASE has screw holes in positions that are identical to what one would expect for a Raspberry Pi 4B. This makes it compatible with a number of bases to which you might want to attach the board. It does differ from the Pi 4B in that it uses a full-sized HDMI port placed where two of the USB ports are on the Pi 4B. At first glance, it appears to give you fewer USB and HDMI options than the Pi 4B, but two USB connections and an HDMI connection are available from the underside of the board. You would need to purchase the HDMI+USB adapter to use those, or interface with them directly.

The top of the board has two connectors for cameras and a connector for an external display. The feature of interest to me was the M.2 PCIe interface on the underside of the board. I decided on an M.2 2242 drive with 256 gigs of capacity. I’ve seen drives in this form factor with capacities up to 2 terabytes (for significantly more money).

Getting the Pi to boot from the NVME isn’t hard. The Compute Module that I have has eMMC memory; that’s basically like having an SD card that you can’t remove. Getting the Pi to boot from the NVME drive involves writing the Pi OS to the NVME drive and changing the boot order on the Pi. For changing the boot order, I needed another Linux device; I used another Raspberry Pi.

Writing the image to the NVME drive works the same way you would write an image to any SD card. I happen to have some external NVME drive enclosures; I removed the drive from one of them and placed my Pi’s NVME drive in it. The Raspberry Pi Imager accepted the drive as a target and wrote the OS to it. The tricky part was modifying the boot order on the CM4.

NVME Drive Enclosure

The default boot order on the CM4 is 0xF461. This didn’t make sense to me the first time I saw it. The boot order is a numeric value best expressed as a hex number. Each digit within that number specifies a boot device. The Pi starts with the boot device specified in the lowest hex digit, tries it first, and then moves on to the next hex digit.

Digit   Device
0x1     SD Card
0x2     Network
0x3     RPI Boot
0x4     USB Mass Storage
0x5     CM4 USB-C Storage Device
0x6     NVME Drive
0xE     Stop/Halt
0xF     Reboot

Raspberry Pi BOOT_ORDER

For the boot order 0xF461, the Pi will try to boot from devices in the following order (the short script after this list decodes any such value the same way).

  • 0x1 – Boot from the SD Card/eMMC
  • 0x6 – Boot from the NVME drive
  • 0x4 – Boot from a USB mass storage device
  • 0xF – Reboot the Pi and try again.
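
To make the digit-by-digit reading concrete, here is a small decoding sketch; the device table mirrors the one above.

BOOT_DEVICES = {
    0x1: "SD Card",
    0x2: "Network",
    0x3: "RPI Boot",
    0x4: "USB Mass Storage",
    0x5: "CM4 USB-C Storage Device",
    0x6: "NVME Drive",
    0xE: "Stop/Halt",
    0xF: "Reboot",
}

def decode_boot_order(value):
    """Yield device names lowest hex digit first, the order the Pi tries them."""
    order = []
    while value:
        order.append(BOOT_DEVICES.get(value & 0xF, "Unknown"))
        value >>= 4
    return order

print(decode_boot_order(0xF461))
# ['SD Card', 'NVME Drive', 'USB Mass Storage', 'Reboot']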

If you have a CM4 with no eMMC memory, all you need to do to ensure the right boot order is followed is to make sure you don’t have an SD card connected to the board; you are then ready to boot from the NVME drive. That’s not my scenario, so I had more work to do. I updated the boot order alongside the Pi’s firmware. The CM4 is usually in one of two modes: it is either running normally, in which case the bootloader is locked, or it is in RPI Boot mode, in which case the bootloader can be written to but the OS isn’t running. The CM4 cannot update its own bootloader; to do that, another computer is needed. I think the best option for updating the bootloader is another Linux machine. In my case, I chose another Raspberry Pi.

The Raspberry Pi can already be picky about the power supplies it works with. I used a USB-C power supply from a Raspberry Pi 400 (the unit built into the keyboard) for the following steps; the usual power supply that I used with my Pi wasn’t sufficient for powering two Pis. You’ll find out why it needed to power two Pis in a moment. I used a Raspberry Pi 4B for writing the firmware to the CM4. To avoid confusion, I’m going to refer to these two devices as the Programmer Device and the CM4.

On the CM4-IO-BASE board there is a switch or a jumper (depending on hardware revision) for switching the Pi to RPI Boot mode. Set a jumper on this pin or turn the switch to “ON”. Connect the CM4 to the Programmer Device with a USB-A to USB-C cable. On the Programmer Device, you will need to clone a GitHub repository that has all of the code that you need. Open a terminal on the Programmer Device, navigate to a folder in which you want the code, and use the following commands to clone and build it.

git clone https://github.com/raspberrypi/usbboot --depth=1
cd usbboot
make

The code is now downloaded and built. Enter the recovery folder and edit the file named boot.conf to change the boot order.

cd recovery
nano boot.conf

At the time of this writing, that file looks like the following.

[all]
BOOT_UART=0
WAKE_ON_GPIO=1
POWER_OFF_ON_HALT=0

# Try SD first (1), followed by, USB PCIe, NVMe PCIe, USB SoC XHCI then network
BOOT_ORDER=0xf25641

# Set to 0 to prevent bootloader updates from USB/Network boot
# For remote units EEPROM hardware write protection should be used.
ENABLE_SELF_UPDATE=1

The line of interest is BOOT_ORDER=0xf25641. The comment in this file already lets you know how to interpret it. You want the NVME drive (0x6) to be tried first, so the digit 6 needs to be the last (lowest) digit. Change the value to 0xf25416. With this change, the CM4 will try to boot from the NVME first and the eMMC second. If you ever want to switch back to using the eMMC, you only need to remove the NVME drive. There is a file named pieeprom.original.bin; this is what is going to be written to the CM4. To ensure that the CM4 has the latest [stable] firmware, download the latest version from https://github.com/raspberrypi/rpi-eeprom/tree/master/firmware/stable and overwrite this file. Looking in that folder right now, I see the most recent file is only 15 hours old and named pieeprom-2022-03-10.bin. To download it from the terminal, use the following command.

wget https://github.com/raspberrypi/rpi-eeprom/raw/master/firmware/stable/pieeprom-2022-03-10.bin -O pieeprom.original.bin

After the file is downloaded, run the update script to assemble the new firmware image.

./update-pieeprom.sh

Navigate to the parent folder. Run the rpiboot utility with the recovery option to write the firmware to the device.

sudo ./rpiboot -d recovery

This command should only take a few seconds to run. When it is done, you should see a green light blinking on the Pi, signaling that it has updated its EEPROM. Disconnect the CM4 from the Programmer Device. Remove the jumper or set the RPI Boot switch to off. Connect the Pi to a display and power supply. For a brief moment, you should see a message that the Pi is expanding the drive partition. After the device reboots, it will be running from the NVME.

At this point, my primary motivation for using the CM4-IO-BASE-B board has been achieved. But there is some additional hardware to consider. If you have the model B, there is a real-time clock to set up. For both models, there is fan control to set up.

Real Time Clock Setup

The real-time clock interfaces with the Pi via I2C. Ensure that I2C is enabled on your Pi by altering the file /boot/config.txt.

sudo nano /boot/config.txt

Find the line of the file that contains dtparam=audio=on and comment it out by placing a # at the beginning of the line. Add the following line to config.txt to ensure I2C is enabled.

dtparam=i2c_vc=on

Reboot the device. With I2C enabled, you can now interact with the RTC through code. Waveshare provides sample code for reading from and writing to the clock. The code in its default state is a good starting point, but it is not by itself adequate for setting the clock. The code is provided for both the C language and Python; I’ll be using the C-language version. To download the code, use the following commands.

sudo apt-get install p7zip-full
sudo wget https://www.waveshare.com/w/upload/4/42/PCF85063_code.7z
7z x PCF85063_code.7z -O./
cd PCF85063_code

After downloading the code, enter the directory for the C-language project, then build and run it using the following commands.

cd c
sudo make clean
sudo make -j 8
sudo ./main

You’ll see output from the clock. Note that the clock starts from just before midnight on February 28, 2021 and progresses into March 1; the code has that starting date hard-coded. Let’s look at the code in main.c to see what it is doing.

#include <stdio.h>		//printf()
#include <stdlib.h>		//exit()
#include <signal.h>     //signal()

#include "DEV_Config.h"
#include <time.h>
#include "waveshare_PCF85063.h"

void  PCF85063_Handler(int signo)
{
    //System Exit
    printf("\r\nHandler:exit\r\n");
    DEV_ModuleExit();

    exit(0);
}

int main(void)
{
	int count = 0;
	// Exception handling:ctrl + c
    signal(SIGINT, PCF85063_Handler);
    DEV_ModuleInit();
    DEV_I2C_Init(PCF85063_ADDRESS);
	PCF85063_init();
	
	PCF85063_SetTime_YMD(21,2,28);
	PCF85063_SetTime_HMS(23,59,58);
	while(1)
	{
		Time_data T;
		T = PCF85063_GetTime();
		printf("%d-%d-%d %d:%d:%d\r\n",T.years,T.months,T.days,T.hours,T.minutes,T.seconds);
		count+=1;
		DEV_Delay_ms(1000);
		if(count>6)
		break;
	}
	
	//System Exit
	DEV_ModuleExit();
	return 0;
}

You can see where the time is set with the functions PCF85063_SetTime_YMD and PCF85063_SetTime_HMS. Let’s update the code to use the date and time that the system is using. Place the following lines above those two function calls. For now, this only grabs the system time and prints it.

    time_t T = time(NULL);
    struct tm tm = *localtime(&T);

    printf("***System Date is: %02d/%02d/%04d***\n", tm.tm_mday, tm.tm_mon + 1, tm.tm_year + 1900);
    printf("***System Time is: %02d:%02d:%02d***\n", tm.tm_hour, tm.tm_min, tm.tm_sec);

Build and run the program again by typing the following two lines from the terminal.

sudo make -j 8
sudo ./main

This time the program will print the actual current date and time.

USE_DEV_LIB
Current environment: Debian
DEV I2C Device
DEV I2C Device
***System Date is: 20/03/2022***
***System Time is: 19:19:06***
21-2-28 23:59:58
21-2-28 23:59:59
21-3-1 0:0:0
21-3-1 0:0:1
21-3-1 0:0:2
21-3-1 0:0:3
21-3-1 0:0:4

Let’s pass this information in to the calls that set the date and the time. The information that we need is in the tm structure. Note that in this structure, the first month of the year is associated with the value 0. Also note that the tm structure stores the year as the number of years since 1900, while the RTC stores the year as the number of years since 2000; we need to shift the value by 100 to account for this difference. The updated lines of code look like the following.

    printf("***System Date is: %02d/%02d/%04d***\n", tm.tm_mday, tm.tm_mon + 1, tm.tm_year + 1900);
    printf("***System Time is: %02d:%02d:%02d***\n", tm.tm_hour, tm.tm_min, tm.tm_sec);
	PCF85063_SetTime_YMD(tm.tm_year - 100,tm.tm_mon + 1,tm.tm_mday);
	PCF85063_SetTime_HMS(tm.tm_hour,tm.tm_min,tm.tm_sec);

When you run the program again, you’ll see the current time. But how do we know the RTC is really retaining the time? One way is to run the program again with the calls that set the time commented out; one would expect the RTC to continue showing the real time based on the previous run. I tried this, and the RTC was printing out times from 01-01-01. Why did this happen?

I’ve not completely dissected the code, but I did find that a call to PCF85063_init() at the beginning of main resets the clock. I just commented this call out; with it not being made, the time is retained. I still use the call when setting the clock, though. I’ve altered the program to accept a command-line parameter: if setrtc is passed as an argument, the program will set the time on the RTC; if setsystem is passed, the program will attempt to set the system time. Setting the system time requires root privileges, so if you try to set the time with this program without running as root, the attempt will fail.

The final version of this code is available in my GitHub account. You can find it here.

Fan Control

There’s a difference between version A and version B in the fan control. On version A, the fan is connected to pin 18 and can be turned on and off by changing the state of this pin, as in the sketch below. On version B, the fan is controlled through the I2C bus, and example code is provided for it.
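
For a version A board, then, fan control is just a digital write to that pin. Here is a minimal sketch using the gpiozero library that ships with Raspberry Pi OS; pin 18 comes from the description above.

from gpiozero import DigitalOutputDevice
from time import sleep

fan = DigitalOutputDevice(18)  # version A fan pin, per the description above

fan.on()    # run the fan for ten seconds
sleep(10)
fan.off()

To download the fan code for version B, use the following commands from the terminal.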

sudo apt-get install p7zip-full
sudo wget https://www.waveshare.com/w/upload/5/56/EMC2301_code.7z
7z x EMC2301_code.7z -O./
cd EMC2301_code

To build and run the code, use the following commands.

cd c
sudo make clean
sudo make -j 8
sudo ./main

Let’s look at a highly abridged version of the code.


		EMC2301_start();
	/*********************************/	
		EMC2301_setSpinUpDrive(60);
		EMC2301_setSpinUpTime(300);
		EMC2301_setDriveUpdatePeriod(100);
		EMC2301_RPMEnable();
			
		EMC2301_writeTachoTarget(8192);
		for(int i=0;i<10;i++)
		{
			EMC2301_fetchFanSpeed();
			DEV_Delay_ms(500);
		}

Fan control is straightforward. After some setup calls, the fan speed can be set by writing to EMC2301_writeTachoTarget(). The call to EMC2301_fetchFanSpeed() reads the current fan speed; through repeated calls to this function, you can watch the fan accelerate when the speed is changed.

Other Hardware

Take note that a number of interfaces are disabled by default on the CM4. This includes the USB-C port, the two camera connectors, and the display connector. If you need to use any of these, the resources page for this board has the information that needs to be added to the /boot/config.txt file.

Conclusion

Pi setup for this board was pretty easy. I’d definitely consider getting another one. If I had to do things all over again, though, I would double-check my cables. There was a moment when I thought things were not working because I wasn’t getting a video signal. It turns out that I had two HDMI cables close to each other that I thought were a single cable; I didn’t get a video signal because I had connected to a cable that was not terminating at my display (time to consider cable organization). This is a great board if you need a Pi that is close to the usual form factor but with more storage. I might get another if I can acquire another CM4 (which is difficult in this chip shortage).




Running WordPress on a NVIDIA Jetson or Raspberry Pi

As part of an exploration of hosting sites and services with a minimal hardware setup, I wanted to install WordPress on a Raspberry Pi. WordPress is an open-source software system for hosting sites and blogs. I’m trying it out because I thought it would be easy to install and set up, and would allow someone to manage posts without demanding that they be familiar with HTML and other web technologies (though knowing them certainly helps). With the Raspberry Pi being an ARM-based Linux computer, I also thought that these instructions might work on an NVIDIA Jetson with little alteration. When I tried it out, I found that they work on the Jetson with no alteration needed at all. In this post I only show how to install WordPress and its dependencies; I’ll cover making the device visible to the Internet in a different post.

To get started, make sure that your Jetson or Raspberry Pi is up to date. Run the following two commands.

sudo apt-get update
sudo apt-get upgrade

These commands could take a while to run. Once they have finished, reboot your device.

Now to install the software. I’m connected to my device over SSH. You can also run these commands directly in a terminal on the device, but everything that I write is from the perspective of having access only to the terminal. We are going to install the Apache web server, a MySQL database, and a PHP interpreter.

Apache Web Server

To install the Apache Web Server, type the following command.

sudo apt-get install apache2

After running for a while, Apache should successfully install. You can verify that it is installed by opening a browser to your device’s IP address. From the terminal, you can do this with the following command. (If the lynx text-mode browser is not installed, you can add it with sudo apt-get install lynx.)

lynx http://localhost

You should see the default Apache page display. To exit this browser press the ‘Q’ key on your keyboard and answer ‘y’ to the prompt.

Installing PHP

To install PHP on your device, use the following command.

sudo apt-get install php

With the PHP interpreter in place, we can add a page with some PHP code to see it processed.

Navigate to the folder that contains the Apache HTML content and add a new page named test-page.php

cd /var/www/html
sudo nano test-page.php

The file will have a single line as its content. Type the following.

<?php echo "Hey!"; ?>

You can now navigate to the page in a browser.

lynx http://localhost/test-page.php

Installing the Database

MariaDB is a MySQL-compatible database. It will contain the content for our site. Install it with the following command.

sudo apt-get install mariadb-server

The database is installed, but it needs to be configured. To access it, we need to set up a user account and a password. Decide what your user ID and password will be now, and also choose a name for the database. You will need to substitute my instances of USER_PLACEHOLDER, PASSWORD_PLACEHOLDER, and DATABASE_PLACEHOLDER with the names and password that you have chosen.

sudo mysql -uroot

You will be presented with the MariaDB prompt. Type the following commands to create your user account, database, and to give permission to the database.

CREATE USER 'USER_PLACEHOLDER'@'localhost' IDENTIFIED BY 'PASSWORD_PLACEHOLDER';
CREATE DATABASE DATABASE_PLACEHOLDER;
GRANT ALL ON DATABASE_PLACEHOLDER.* to 'USER_PLACEHOLDER'@'localhost';
quit;

We need to make sure that account can access the database. Let’s connect to the database using the account that you just created.

mysql -u USER_PLACEHOLDER -p

You will be prompted to enter the password that you chose earlier. After you are logged in, type the following to list the databases.

SHOW DATABASES;

A list of the databases will show, which should include a predefined system database and the one you just created.

We also need to install a package so that PHP and MySQL can interact with each other.

sudo apt-get install php-mysql

Installing WordPress

The downloadable version of WordPress can be found at wordpress.org/download. To download it directly from the device to the web folder use the following command.

sudo wget https://wordpress.org/latest.zip -O /var/www/html/wordpress.zip

Enter the folder, unzip the archive, and grant Apache permissions on the folder.

cd /var/www/html
sudo unzip wordpress.zip
sudo chmod 755 wordpress -R
sudo chown www-data wordpress -R

We are about to access our site. It can be accessed through the device’s IP address at http://IP_ADDRESS_HERE/wordpress. As a personal preference, I would prefer the site suffix to be something other than wordpress, so I’m changing it to something more generic: “site”.

sudo mv wordpress site

Now let’s restart Apache.

sudo service apache2 restart

From here on, I am going to interact with the device from another computer with a desktop browser; I won’t need to do anything in the device terminal. Using a browser on another computer, I navigate to my device’s IP address and the /site folder. The IP address of my device is 192.168.50.216, so the complete URL that I use is http://192.168.50.216/site. When I navigate there, I get prompted to select a language.

A screenshot of the language selection screen on the Raspberry Pi. This is the first screen that you will encounter when WordPress is served from the Pi for the first time.
WordPress Language Prompt

The next page lets you know the information that you will need to complete the setup. That information includes:

  • The database name
  • The database user name
  • The database password
  • The database host
  • The table name prefix

The first three items should be familiar. The fourth item, the database host, is the name of the machine that has the database; since we are running the database and WordPress on the same device, this entry will be “localhost”. If we were running more than one site from this device, the tables for each instance could share a common prefix to keep them separate within the database. I’m going to use the prefix wp_ for all of the tables. All of this information will be saved to a file named wp-config.php; if you need to change anything later, your settings can be modified in that file.

These are the default settings for WordPress. The first three fields must be populated with the information that you used earlier.
Default WordPress Settings

Enter the database name, user name, and password that you decided on earlier. Leave the host name and the table prefix at their defaults and click “Submit.” If you entered everything correctly, the next screen will prompt you with a button to run the installation.

WordPress prompt to run the installation. This shows after successfully configuring it to access the database.

On the next page, you must choose some final settings for your WordPress configuration.

Final Setup Screen

After clicking on “Install WordPress” on this screen, you’ve completed the setup. With the instructions as I’ve written them (including the rename above), the site will be in the path /site and the administrative interface will be in the path /site/wp-admin. WordPress is easy to use, but a complete explanation of how it works could be lengthy and won’t be covered here.


Creating a Service on a Raspberry Pi or Jetson Nano

Creating a service on a Raspberry Pi or a Jetson is easier than I thought. At the same time, there is still a lot of information to sort through. I’m still exploring the various settings that can be applied to a service, but I wanted to share the information that I thought would be immediately useful. While I was motivated to explore this by something I was doing on a Jetson Nano, the code and instructions work identically on a Raspberry Pi, without any need for modification.

I have a Jetson Mate. The Jetson Mate is an accessory for the Jetson Nano and Jetson Xavier NX modules. Up to four modules can be placed within the Jetson Mate to form a cluster. Really, the Jetson Mate is just a convenient way to power multiple Jetsons and connect them to a wired network; it contains a 5-port switch so that a single network cable can be used to connect all of the modules. Despite the Jetsons being in the same box, they don’t have an immediate way to know about each other. The documentation from Seeed Studio suggests logging into your router and finding the IP addresses there.

That approach is fine when I’m using the Jetsons from my house, where I have complete access to the network. But that’s not always possible; on some other networks I may not have access to the router settings. I made a program that lets the Jetsons announce their presence over UDP multicast. This could be useful on my Pis also, since I run many of them as headless units. I needed this program to start automatically after the device was powered on and to keep running. How do I do that? By making it a service.

There are several ways to schedule a task to run on Linux. I’m using systemd, which was designed to unify service configurations across Linux distributions. The information shown here has applicability well beyond the Pi and Jetson.

The details of how my discovery program works are a discussion for another day. Let’s focus on what is necessary for making a service. For a sample service, let’s make a program that does nothing more than increment a variable and output its new value. The code that I show here is available on GitHub (https://github.com/j2inet/sample-service), but it is small enough to place here also. This is the program.

#include <iostream>
#include <thread>

using namespace std;

int main(int argc, char** argv) 
{
    int counter = 0;
    while(true)
    {
        cout << "This is cycle " << ++counter << endl;
        std::this_thread::sleep_for(std::chrono::seconds(10));
    }
}

This program counts, outputting a number once every ten seconds. To build the program, you will need to have cmake installed. To install it, use the following command at the terminal.

sudo apt-get install cmake -y

Once that is installed, from the project directory only a couple of commands are needed to compile the program.

cmake ./
make

The program is built, and a new executable named service-sample is now in the folder. If you run it, you will see the program counting. Press CTRL-C to terminate the program. Now we are going to make it into a service.

To make a service, you will need to copy the executable to a specific folder and also provide a file with the settings for the service. For the service settings, I’ve made a file named similarly to the executable. This isn’t a requirement, but it’s something I’ve chosen to do to make the association easier. In a file named service-sample.service, I’ve placed the settings for the service. Many of these settings are technically optional; you only need to set many of them if your specific service depends on them. I’m showing more than is necessary for this service because I think some of these settings will be useful to you for other projects, and I wanted to provide an example.

[Unit]
Description=Counting service.
Wants=network.target
After=syslog.target network-online.target

[Service]
Type=simple
ExecStart=/usr/local/bin/service-sample
Restart=on-failure
RestartSec=10
KillMode=process

[Install]
WantedBy=multi-user.target

Here is what some of those settings mean. Note that I also describe some settings that are not used here but are available for you to consider. You can also see the documentation for this file in the man pages.

[Unit] section

Documentation viewable with the following command

man systemd.unit

Description: A short text description of the service.
Documentation: URIs at which documentation for the service can be found.
Requires: Other units that will be activated or deactivated in conjunction with this unit.
Wants: Expresses weak dependencies. The unit will try to activate these dependencies first, but if those dependencies fail, this unit will be unaffected.
Conflicts: Prevents this unit from running at the same time as a conflicting unit.
After/Before: Used to express the order in which units are started. These settings contain a space-delimited list of unit names.

[Install] Section

Documentation for the [Install] section is also found in the systemd.unit man page.

RequiredBy / WantedBy: Starts the current service if any of the listed services are started. WantedBy is a weaker dependency than RequiredBy.
Also: Specifies services that are to be started or disabled along with this service.

[Service] Section

Documentation for the [Service] section is viewable with the following command.

man systemd.service

Type: one of the following values.
* simple – (default) starts the service immediately
* forking – the service is considered started once the process has forked and the parent has exited
* oneshot – similar to simple; assumes the service does its job and exits
* notify – considers a service started when it sends a signal to systemd

ExecStart: Commands with arguments to execute to start the service. Note that when Type=oneshot, multiple commands can be listed and executed sequentially.
ExecStop: Commands to execute to stop the service.
ExecReload: Commands to execute to trigger a configuration reload of the service.
Restart: When this option is enabled, the service will be restarted when the service process exits or is killed.
RemainAfterExit: When true, the service is considered active even after all of its processes have exited. Mostly used with Type=oneshot.

Deploying

Having the executable and this service file is not by itself enough. They must be moved to the appropriate locations, and the service must be activated. I’ve placed the steps for doing this in a script. The script is intentionally a bit verbose to make clear what it is doing at any time. The first thing the script does is terminate the service. While this might sound odd given that we haven’t installed the service yet, I do this to make the script rerunnable; if this is not the first time the script has run, it is possible that the service process is running. To be safe, I terminate it.

Next, I copy the files to their appropriate locations. For this simple service, those files are one executable binary and the service settings. The executable is placed in /usr/local/bin. The service settings are copied to /etc/systemd/system/. The permissions on the service settings file are changed with chmod to ensure the owner has read/write permissions and the group has read permission.

With the files for the service in place, we next ask systemd to reload the service definitions. I then probe the status of my service; while my service isn’t running yet, I should see it listed. I then enable the service (so that it will run on system startup), start the service (so that I don’t need to reboot to see it running now), and probe the status again.

echo "stopping service. Note that the service might not exists yet."
sudo systemctl stop service-sample

echo "--copying files to destination--"
sudo cp ./service-sample /usr/local/bin
sudo cp ./service-sample.service /etc/systemd/system/service-sample.service
echo "--setting permissiongs on file--"
sudo chmod 640 /etc/systemd/system/service-sample.service
echo "--reloading daemon and service definitions--"
sudo systemctl daemon-reload
echo "--probing service status--"
sudo systemctl status service-sample
echo "--enabling service--"
sudo systemctl enable service-sample
echo "--starting service service status--"
sudo systemctl start service-sample
echo "--probing service status--"
sudo systemctl status service-sample

After the service is installed and running, you can use the command for probing the status to see what it is up to. The last few lines of the service’s output will display with the service information. Probe the service status at any time using this command.

sudo systemctl status service-sample

Sample output from the command follows.

pi@raspberrypi:~ $ sudo systemctl status service-sample
● service-sample.service - Counting service.
   Loaded: loaded (/etc/systemd/system/service-sample.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2022-03-09 15:57:12 HST; 12min ago
 Main PID: 389 (service-sample)
    Tasks: 1 (limit: 4915)
   CGroup: /system.slice/service-sample.service
           └─389 /usr/local/bin/service-sample

Mar 09 16:09:29 raspberrypi service-sample[389]: This is cycle 361
Mar 09 16:09:31 raspberrypi service-sample[389]: This is cycle 362
Mar 09 16:09:33 raspberrypi service-sample[389]: This is cycle 363
Mar 09 16:09:35 raspberrypi service-sample[389]: This is cycle 364
Mar 09 16:09:37 raspberrypi service-sample[389]: This is cycle 365
Mar 09 16:09:39 raspberrypi service-sample[389]: This is cycle 366
Mar 09 16:09:41 raspberrypi service-sample[389]: This is cycle 367
Mar 09 16:09:43 raspberrypi service-sample[389]: This is cycle 368
Mar 09 16:09:45 raspberrypi service-sample[389]: This is cycle 369
Mar 09 16:09:47 raspberrypi service-sample[389]: This is cycle 370
pi@raspberrypi:~ $
Screenshot of service output. Note the green dot indicates the service is running.

The real test for the service comes after reboot. Once you have the service installed and running on your Jetson or your Pi, reboot it. After it boots up, probe the status again. If you see output, then congratulations, your service is running! Now that a service can be easily created and registered, I’m going to refine the code that I used for discovery of the Pis and Jetsons for another post.
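
One more tip: systemctl status only shows the last few lines of output. To page through everything the service has written to the journal, use journalctl with the unit name.

sudo journalctl -u service-sample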


Emoji Only Text Entry on Android

Android supports a number of different text input types. If you create a text field in which someone is intended to enter a phone number, address, or email address, you can set the input type on the text field to have the keyboard automatically restrict what characters it presents to the user.

<EditText android:inputType="phone" />

I was working on something for which I needed to ensure that the user selected an emoji. Unfortunately, there’s no input type to restrict a user to emoji. How do I ensure that the user can only enter emoji? I could implement my own keyboard that only displays emoji, but for the time being I do not want to implement such a thing for the application that I’m building. There are a number of possible solutions for this. The one I chose was to make a custom text input field and an InputFilter for it.

Making a custom field may sound like a lot of work, but it isn’t. The custom field itself is primarily declarations, with only a single line of initialization code that applies the filter. The real work is done in the filter itself. For a custom InputFilter, make a class that derives from InputFilter. There is a single method on the class to define, named filter.

The filter method receives the text sequence that is going to be assigned to the text field. If we want to allow the field value to be assigned to the text field, the function can return null. If there is a character of which I don’t approve, I’ll return an empty string which will clear the text field. If I wanted to replace a character, I could return a string in which I had performed the replacement. The value returned by this function will be applied to the text field.

In my implementation of this function, I step through each character in the string (passed as a CharSequence; a String is a type of CharSequence) and check the class of each character. If a character is not of an acceptable class, I return an empty string to clear the text field. For your purposes, you may want to strip characters out and return the resulting string.

The function Character::getType will return the character’s type, or class. To ensure that the character is an emoji, I check whether the type value equals Character::SURROGATE or Character::OTHER_SYMBOL.

private class EmojiFilter : InputFilter {

    override fun filter(
        source: CharSequence?,
        start: Int,
        end: Int,
        dest: Spanned?,
        dstart: Int,
        dend: Int
    ): CharSequence? {
        // Examine each incoming character and reject the change if any
        // character is not part of an emoji.
        for (i in start until end) {
            val type = Character.getType(source!![i]).toByte()
            if (type != Character.SURROGATE && type != Character.OTHER_SYMBOL) {
                // Returning an empty string clears the rejected input.
                return ""
            }
        }
        // Returning null accepts the input unchanged.
        return null
    }
}

Now that the filter class is defined, I can create my custom text field. It has next to no code within it.

public class FilteredEditText : androidx.appcompat.widget.AppCompatEditText {

    constructor(context:Context, attrs: AttributeSet?, defStyle:Int):super(context,attrs,defStyle)
    {
    }

    constructor(context:Context, attrs: AttributeSet?):super(context,attrs)
    {
    }

    constructor(context:Context) : super(context) {
    }

    init{
        filters = arrayOf(EmojiFilter())
    }

}

Great, we have our custom class! Now how do we use it in a layout? We can use it in the layout by declaring a view with the fully qualified name.

    <net.j2i.emojidiary.FilteredEditText
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:text="😀"
        />

And that’s it. While Android doesn’t give the option of showing only the Emoji characters, with this in place I’m assured that a user will only be able to enter Emoji characters.
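
As an aside, the custom view class is a convenience rather than a requirement. The same filter can be applied to any existing EditText from code, assuming EmojiFilter is visible from the activity; the field ID below is hypothetical.

val editText = findViewById<EditText>(R.id.txtEmoji)
editText.filters = arrayOf(EmojiFilter())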

Working with the Phidgets RFID Reader

I recently worked on a project that made use of Phidgets hardware. Phidgets has many offerings for interfacing hardware sensors to a computer using USB. Provided that the driver is installed, using the devices is pretty straightforward. They have software components available for a variety of languages and operating systems. For the project that I was working on, a computer would have multiple Phidget RFID readers. When there are multiple instances of hardware on a system, a specific instance can be addressed through its serial number. On this project, the serial numbers were stored in a configuration file. For the single machine on which this project was deployed, that was fine. Once that was a success, the client wanted the software deployed to another 7 machines.

The most direct strategy for this would be to make a configuration file for each machine with its specific serial numbers in it. I did this temporarily, but I am not a fan of having differences in deployment files. A simple mistake could result in the wrong configuration being deployed and a non-working software system. Deployments would be frequent because users were interacting with the software during this phase of development. An unexpected problem we encountered was that someone disconnected hardware from one computer and moved it to another. Why they decided to perform this hardware swap is not known to me. But it resulted in two machines with sensors that were no longer responsive.

After a little digging I found a much better solution.

For some reason the Phidgets code examples that I encounter don’t mention this, but it is also possible to get notification of a Phidgets device being connected to or disconnected from the computer, along with the serial number of the device being referenced. I used a .Net environment for my development, but this concept is applicable in other languages too. I’ll be using .Net in my code examples.

In the .Net environment, Phidgets offers a class named Manager in their SDK. An instantiated Manager raises Attach and Detach events each time an item of hardware is connected to or disconnected from the system. If the class is instantiated after hardware has already been connected, it will raise Attach events for each item of hardware that is already there. The event argument object contains a member of the class Phidget that, among other data, contains the serial number of the item found and what type of hardware it is. For the application I was asked to update, I was only interested in the RFID reader hardware, so I had the code ignore Phidgets of any other type. While it is unlikely the client would randomly connect such hardware to the machines running the solution, for the sake of code reuse it is better to have such protection in the application.

Let’s make an application that will list the RFID readers and the tag detected by each reader. I’m writing this in WPF. Not shown here is some of the typical code you might find in such a project, such as a ViewModel base class. In my sample project, I wrapped the Phidgets Manager class in another class that also keeps a list of the Phidget object instances that it has found. An abbreviated version of the class follows. The Phidgets Manager class will start raising Attach and Detach events once its Open() method has been called. If there is already hardware attached, expect events to be raised immediately.

    public partial class PhidgetsManager:IDisposable
    {
        Manager _manager;
        List<Phidget> _phidgetList = new List<Phidget>();
        public PhidgetsManager()
        {
            _manager = new Manager();
            _manager.Attach += _manager_Attach;
            _manager.Detach += _manager_Detach;
        }

        private void _manager_Detach(object sender, Phidget22.Events.ManagerDetachEventArgs e)
        {
            _phidgetList.Remove(e.Channel);
            OnPhidgetDetached(e.Channel);
        }

        private void _manager_Attach(object sender, Phidget22.Events.ManagerAttachEventArgs e)
        {
            _phidgetList.Add(e.Channel);
            OnPhidgetAttached(e.Channel);
        }
        public enum Action
        {
            Connected, 
            Disconnected
        }

        public class PhidgetsActionEventArgs
        {
            public Phidget Phidget { get; internal set;  }
            public Action Action { get; internal set; }
        }

        public delegate void PhidgetsActionEvent(object sender, PhidgetsActionEventArgs args);
        public event PhidgetsActionEvent DeviceAttached;
        public event PhidgetsActionEvent DeviceDetached;

        protected void OnPhidgetAttached(Phidget p)
        {
            if (DeviceAttached != null)
            {
                var arg = new PhidgetsActionEventArgs()
                {
                    Action = Action.Connected,
                    Phidget = p
                };
                DeviceAttached(this, arg);
            }
        }

        protected void OnPhidgetDetached(Phidget p)
        {
            if (DeviceDetached != null)
            {
                var arg = new PhidgetsActionEventArgs()
                {
                    Action = Action.Disconnected,
                    Phidget = p
                };
                DeviceDetached(this, arg);
            }
        }

    }

In my MainViewModel I only wanted to capture the RFID readers. It has its own list for maintaining these. When a device of the right type is found, I create a new RFID object and assign its DeviceSerialNumber. When the RFID class’s Open() method is called, it will attach to the correct hardware since the serial number has been set. The RFID instance is added to my list. My list uses a wrapper class that exposes the device serial number and the current RFID tag that the device sees. This is exposed through a ViewModel object so that the UI will automatically update.

    public class RFIDReaderViewModel: ViewModelBase, IDisposable
    {
        RFID _reader;

        public RFIDReaderViewModel(Phidget phidget)
        {
            if(phidget == null)
            {
                throw new ArgumentNullException("phidget");
            }
            if(phidget.ChannelClass != ChannelClass.RFID)
            {
                throw new ArgumentException($"Phidget must be an RFID Reader. The received item was a {phidget.ChannelClassName}");
            }
            this.Reader = new RFID();
            this.Reader.DeviceSerialNumber = phidget.DeviceSerialNumber;
            this.Reader.Tag += Reader_Tag;
            this.Reader.TagLost += Reader_TagLost;
            this.Reader.Open();
        }

        private void Reader_TagLost(object sender, Phidget22.Events.RFIDTagLostEventArgs e)
        {
            Dispatcher.CurrentDispatcher.Invoke(() =>
            {
                CurrentTag = String.Empty;
            });
        }

        private void Reader_Tag(object sender, Phidget22.Events.RFIDTagEventArgs e)
        {
            Dispatcher.CurrentDispatcher.Invoke(() =>
            {
                CurrentTag = e.Tag;
            });
        }

        public RFID Reader
        {
            get { return _reader;  }
            set { SetValueIfChanged(() => Reader, () => _reader, value); }
        }
        String _currentTag;
        public String CurrentTag
        {
            get { return _currentTag;  }
            set { SetValueIfChanged(() => CurrentTag, ()=>_currentTag, value);  }
        }

        public void Dispose()
        {
            try
            {
                this.Reader.Close();
            }
            catch (Exception)
            {
                // Closing may fail if the reader was never opened; safe to ignore
                // during disposal.
            }
        }
    }
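
The MainViewModel itself is mostly plumbing, so I haven’t listed mine in full. A minimal sketch of the relevant part might look like the following. It assumes the abbreviated PhidgetsManager above calls Open() on its Manager (elided in that listing), that ViewModelBase exists as in any typical WPF project, and the usual usings (System.Collections.ObjectModel, System.Linq, System.Windows, Phidget22).

    public class MainViewModel : ViewModelBase
    {
        readonly PhidgetsManager _phidgetsManager = new PhidgetsManager();

        public ObservableCollection<RFIDReaderViewModel> ReaderList { get; } =
            new ObservableCollection<RFIDReaderViewModel>();

        public MainViewModel()
        {
            _phidgetsManager.DeviceAttached += (s, args) =>
            {
                // Only RFID readers are of interest; ignore every other Phidget type.
                if (args.Phidget.ChannelClass != ChannelClass.RFID)
                    return;
                Application.Current.Dispatcher.Invoke(() =>
                    ReaderList.Add(new RFIDReaderViewModel(args.Phidget)));
            };
            _phidgetsManager.DeviceDetached += (s, args) =>
            {
                Application.Current.Dispatcher.Invoke(() =>
                {
                    // Find the view model wrapping the detached reader and drop it.
                    var vm = ReaderList.FirstOrDefault(r =>
                        r.Reader.DeviceSerialNumber == args.Phidget.DeviceSerialNumber);
                    if (vm != null)
                    {
                        vm.Dispose();
                        ReaderList.Remove(vm);
                    }
                });
            };
        }
    }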

All that’s left is the XAML. In the XAML for now, I’m only interested in listing the serial numbers and the tag strings.

<UserControl x:Class="PhidgetDetectionDemo.Views.MainView"
             xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
             xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
             xmlns:local="clr-namespace:PhidgetDetectionDemo.Views"
             >
    <Grid>
        <ListView ItemsSource="{Binding ReaderList}">
            <ListView.ItemTemplate>
                <DataTemplate>
                    <Grid>
                        <Grid.ColumnDefinitions>
                            <ColumnDefinition Width="*" />
                            <ColumnDefinition Width="*" />
                        </Grid.ColumnDefinitions>
                        <TextBlock Text="{Binding Reader.DeviceSerialNumber}" />
                        <TextBlock Grid.Column="1" Text="{Binding CurrentTag}"/>
                    </Grid>
                </DataTemplate>
            </ListView.ItemTemplate>
        </ListView>
    </Grid>
</UserControl>

With that in place, now when I run the project, I get live updates of RFID tags coming into or out of range of each of the readers connected.

If you try to run this though, you may encounter a problem. The RFID readers (as the name implies) use radio frequencies for their operation. If they are close to each other, they may interact and interfere with each other. Don’t worry, this doesn’t mean that you can’t use them in close proximity. In my next entry I’ll show how to deal with RFID readers that are close to each other and mitigate interference through software.


Android Multi-Phone Debugging

I’m working on an application that uses Android’s WiFi P2P functionality so that two phones can communicate directly with each other. Because of the nature of this application, I need to have two instances of the program running at once. The only problem is that Android Studio only lets me have one debug target at a time. I thought of a few potential solutions.

  • Deploy to one phone, start the application, then start debugging on the second
  • Use a second computer and have each phone connected to a computer

Both of these solutions have shortcomings; they are both rather cumbersome. While this isn’t a supported scenario yet, there’s a better solution: we can use two instances of Android Studio on the same computer to open the project. We need a little bit of support from the operating system to pull this off, since Android Studio will otherwise see that we are opening a project that is already open. Before doing this, we need to make a symbolic link to our project.

A symbolic link is an entry in the file system that has its own unique path but points to existing data on the file system. Using symbolic links, a single file can be accessed through multiple paths. To Android Studio, these are two separate projects. But since they are the same data streams on the file system, the two “instances” will always be in sync. There are some files that are going to be unique to one instance or the other, but we will cover that in a moment.

Symbolic links are supported on Windows, macOS, and on Linux. To make a symbolic link on macOS, use the ln command.

ln -s /original/path /new/linked/path

On Windows, use the mklink command. Note that mklink takes the new link’s path first, followed by the existing path.

mklink /j c:\linked\path c:\original\path

Make sure that Android Studio is closed. Make a copy of your project’s folder. In the new copy that you just made, you are going to erase most of the files. Delete the folder app/src in the copy. Open a terminal and navigate to the root of the copied project. In my case the original project is called P2PTest and the copy is called P2PCopy. To make the symbolic link for the src folder, I use the following command.

ln -s ../P2PTest/app/src app/src

Some other resources that I’ve looked at suggest doing the same thing for the project’s build.gradle and the build.gradle for each module. For simple projects, the only module is the app. I tried this, and while it worked fine for the project’s build.gradle, I would always get errors about a broken symbolic link when I tried it with the build.gradle at the module level. In the end, I only did this at the project level.

## ln -s ../P2PTest/app/build.gradle app/build.gradle ## this line had failed results
ln -s ../P2PTest/build.gradle build.gradle

Because I could not link the module’s build.gradle, if changes are made to it then it will need to be copied between the instances of the project. Thankfully, most changes to a project will be in the source files. While it is possible to edit the source files from either project, I encourage only editing from the primary project. This will help avoid situations where you have unsaved changes to the same file in different editors and have to manually merge them.
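
When the module’s build.gradle does change, a plain copy keeps the secondary project in sync. Run this from the root of the copied project, with the folder names used above:

cp ../P2PTest/app/build.gradle app/build.gradle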

When you are ready to debug, you can set one instance of Android Studio to one of your phones, and the other instance to the other phone. Here, I have two instances set to deploy to a Galaxy Note 5 and a Galaxy Note 8.

Using My Phone as a Web Server – Introduction

I’m back from a recent trip out of the country. While the facility I was staying at had an Internet connection available, the price for about a week of access was a little over 100 USD. I’d rather go without a connection. While I didn’t have access to the Internet, I did have access to a local network. I considered options for how to bring media with me. Rather than bring a few movies and songs here and there, I wanted to indiscriminately copy what I could to a drive. In addition to myself, there were three other people with me who might also want to view the media. It made sense to host the media on a pocket-sized web server. I set up a Raspberry Pi to do just this and took it with me on the trip.

After the trip was over, I thought to myself that there should be a way to do the same thing in a more compact package. I started to look at what the smallest Raspberry Pi Computer Module based setup would look like, and as I mentally constructed a solution in my mind, I realized it was converging to the same form factor as a phone. I’ve got plenty of old phones lying about. While I wouldn’t suggest this as a general solution (phones are a lot more expensive than a Pi) it is what I decided to have fun with.

Extra, unused Android devices.

There are various ways to run NodeJS on a phone, and some other apps in the app store let you host a web server on your phone. I didn’t use any of these. I am reinventing the wheel simply because I find enjoyment in creating. It was a Sunday night, I was watching my TV lineup, and I decided to make a simple proof of concept. I only wanted the PoC to listen for incoming requests and send a hardcoded HTML page back to the client. I had that working in no time! I’ll build upon this to give it the ability to host static files and media files in a future update. First, I’m taking a moment to talk about how I built this.

I created a new Android project. Before writing code, I declared a few permissions. I like to do this first so that later on I don’t have to wonder why a specific call failed. The permissions I added are for Internet access, accessing the WiFi state, and the Wake Lock to keep the device from completely suspending. For what I show here, only the Internet permission is actually used. You can choose to omit the other two permissions for this version of the program.
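
For reference, the corresponding entries in AndroidManifest.xml look like this:

<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.ACCESS_WIFI_STATE" />
<uses-permission android:name="android.permission.WAKE_LOCK" />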

With the permissions in place, I started writing the code. There are only three classes used in the web server (counting an interface as a class).

  • WebServer – Listens for incoming requests and passes them off to be handled as they come in
  • ClientSocketHandler – Processes an incoming connection and sends the response
  • IStatusUpdateReceiver – Used for passing status information back to the UI

The WebServer class accepts in its constructor the port on which it should run and a Context object, which is needed for some other calls. A WebServer instance does not begin to listen for connections until its start() method is called. Once it is started, the class retrieves the address of the WiFi adapter and creates a socket that is bound to this address. A status message is also sent to the UI so that it can show the device’s IP address. The class then creates the thread that will listen for incoming connections.

In the function listenerThread(), the class waits for an incoming connection. As soon as it receives one, it creates a new ClientSocketHandler with the socket and lets the ClientSocketHandler process the request and immediately goes back to listening for another connection. It doesn’t wait for the ClientSocketHandler to finish before waiting for the next connection.

package net.j2i.webserver.Service
import android.content.Context
import android.net.wifi.WifiManager
import android.text.format.Formatter
import java.net.InetSocketAddress
import java.net.ServerSocket
import java.net.Socket
import kotlin.concurrent.thread
class WebServer {
    companion object {
    }
    val port:Int;
    lateinit var receiveThread:Thread
    lateinit var  listenerSocket:ServerSocket;
    var keepRunning = true;
    val context:Context;
    var statusReceiver:IStatusUpdateReceiver
    constructor(port:Int, context: Context) {
        this.port = port;
        this.context = context;
        // The default status receiver does nothing; the UI replaces it with its own.
        this.statusReceiver = object : IStatusUpdateReceiver {
            override fun updateStatus(ipAddress: String, clientCount: Int) {
            }
        }
    }
    fun start() {
        keepRunning = true;
        val wifiManager:WifiManager =
            this.context.getSystemService(Context.WIFI_SERVICE) as WifiManager;
        val wifiIpAddress:String = Formatter.formatIpAddress(wifiManager.connectionInfo.ipAddress);
        this.statusReceiver.updateStatus(wifiIpAddress, 0)
        this.listenerSocket = ServerSocket();
        this.listenerSocket.reuseAddress = true;
        this.listenerSocket.bind(InetSocketAddress(wifiIpAddress, this.port))
        this.receiveThread = thread(start = true) {
                this.listenerThread()
        }
        //this.receiveThread.start()
    }
    fun listenerThread() {
        while(keepRunning) {
            var clientSocket: Socket = this.listenerSocket.accept()
            val clientSocketHandler = ClientSocketHandler(clientSocket)
            clientSocketHandler.respondAsync()
        }
    }
}

In ClientSocketHandler, the class grabs the InputStream (to read the request from the remote client) and the OutputStream (to send data back to the client). I haven’t implemented the full HTTP protocol. But in HTTP, the client will send one or more lines that make up the request, followed by a blank line. For now, my client handler reads from the input stream until that blank line is encountered. Once the request is received, it composes a response.
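
As an illustration, the request a browser sends for the root page looks roughly like the following, terminated by a blank line (the Host value here is made up). The handler just reads lines until it reaches the empty one.

GET / HTTP/1.1
Host: 192.168.0.10:8888
Connection: keep-alive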

I’ve got the HTML string that the client is going to receive hardcoded into the application. The response string is converted to a byte array. The size of this array is needed for one of the response headers; the client will receive the size of the response in the header Content-Length. The header for the response is constructed as a string and converted to a byte array. Then the two arrays are sent back to the client (first the header, then the content). After the response is sent, the handler has done its work.

package net.j2i.webserver.Service
import android.util.Log
import java.lang.StringBuilder
import java.net.Socket
import kotlin.concurrent.thread
class ClientSocketHandler {
    companion object {
        val TAG = "ClientSocketHandler"
    }
    private val clientSocket: Socket;
    private val responseThread:Thread
    constructor(sourceClientSocket:Socket) {
        this.clientSocket = sourceClientSocket;
        this.responseThread = thread( start = false) {
                this.respond()
        }
    }
    public fun respondAsync() {
        // Use start(), not run(); run() would execute respond() on the
        // caller's thread instead of on the new thread.
        this.responseThread.start()
    }
    private fun respond() {
        val inputStream = this.clientSocket.getInputStream()
        val outputStream = this.clientSocket.getOutputStream()
        // Create the reader once; creating a new one per line would discard
        // data already pulled into its buffer.
        val reader = inputStream.bufferedReader()
        var requestReceived = false
        while (inputStream.available() > 0 && !requestReceived) {
            val requestLine = reader.readLine()
            Log.i(ClientSocketHandler.TAG, requestLine)
            if (processRequestLine(requestLine)) {
                requestReceived = true
            }
        }
        val sb:StringBuilder = StringBuilder()
        val sbHeader = StringBuilder()
        sb.appendLine(
            "<html>"+
                    "<head><title>Test</title></head>" +
                    "<body>Test Response;lkj;ljkojiojioijoij</body>"+
                   "</html>")
        sb.appendLine()
        val responseString = sb.toString()
        val responseBytes = responseString.toByteArray(Charsets.UTF_8)
        val responseSize = responseBytes.size
        sbHeader.appendLine("HTTP/1.1 200 OK");
        sbHeader.appendLine("Content-Type: text/html");
        sbHeader.append("Content-Length: ")
        sbHeader.appendLine(responseSize)
        sbHeader.appendLine()
        val responseHeaderString = sbHeader.toString()
        val responseHeaderBytes = responseHeaderString.toByteArray(Charsets.UTF_8)
        outputStream.write(responseHeaderBytes)
        outputStream.write(responseBytes)
        outputStream.flush()
        outputStream.close()
    }
    fun processRequestLine(requestLine:String): Boolean {
        if(requestLine == "") {
            return true;
        }
        return false;
    }
}

The interface that I mentioned, IStatusUpdateReceiver, is currently only being used to communicate the IP address on which the server is listening back to the UI.

package net.j2i.webserver.Service
interface IStatusUpdateReceiver {
    fun updateStatus(ipAddress:String, clientCount:Int);
}

Since the server runs on a different thread, before updating the UI I must make sure that UI-related calls are performed on the main thread. If you look in the class for MainActivity, you will see that I created the WebServer instance in the activity. I’m only doing this because it is a PoC; if you make your own application, implement this as a service. I set the statusReceiver member of the WebServer to an anonymous class instance that does nothing more than update the IP address displayed in the UI. The call to set the text in the UI is wrapped in a runOnUiThread block. After this is set up, I call start() on the web server to get things going.

package net.j2i.webserver
import androidx.appcompat.app.AppCompatActivity
import android.os.Bundle
import android.widget.TextView
import net.j2i.webserver.Service.IStatusUpdateReceiver
import net.j2i.webserver.Service.WebServer
class MainActivity : AppCompatActivity() {
    lateinit var webServer:WebServer
    lateinit var txtIpAddress:TextView
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)
        this.txtIpAddress = findViewById(R.id.txtIpAddress)
        this.webServer = WebServer(8888, this)
        this.webServer.statusReceiver = object:IStatusUpdateReceiver {
            override fun updateStatus(ipAddress:String, clientCount:Int) {
                runOnUiThread {
                    txtIpAddress.text = ipAddress
                }
            }
        }
        this.webServer.start()
    }
}

I was happy that my proof of concept worked. I haven’t yet decided whether I am going to throw this away or continue working from it. In either case, there are a few things that I want to have in whatever my next version is. I do absolutely no exception handling or cleanup in this code. It needs to be able to time out a connection and refuse connections if it gets inundated. I also want my next version to do actual processing of the incoming HTTP request and serve up content that has been saved to the device’s memory, such as a folder on the device’s memory card. While I am making this to serve up static content, I might add a few services to the server/device side, such as a socket server. That will require a lot more thought.


Alternate File Streams::Security Concerns?

I previously wrote about alternate data streams. Consider this an addendum to that post.

Alternate file streams are a Windows file system feature that allows additional sets of data to be attached to a file. Each data stream can be independently edited, but they are all part of the same file. Since the Windows UI doesn’t show information on these streams, the feature raises a few perceivable security concerns. Whether it is the intention or not, information within an alternate file stream is concealed from all except those that know to look for it. This isn’t limited to humans looking at the file system, but also applies to other security products that may scan a file system.

It is possible to put executable content within an alternate file stream. Such executable content can’t be invoked directly from the UI, but it can be invoked through tools such as WMI. Given these security concerns that alternate streams may raise, why do I still use them? Those concerns are only applicable to how other untrusted entities may use the feature, and any action of an untrusted entity may be a concern.

I thought this concern was worth mentioning because if you try searching for more information on alternate file streams, these concerns are likely to come up on the first page of results.

Working With Alternative Data Streams::The “Hidden” Part of Your Windows File System

In the interest of keeping a cleaner file system, I sometimes try to minimize the number of files that I need to keep data organized. A common scenario where this goal is expressed is when writing an application that must sync with some other data source, such as a content management system. Having a copy of the files from another system isn’t always sufficient; sometimes additional data is needed for keeping the computers in a solution in sync. For a given file, I may need to also track an etag, CRC, or information on the purpose of the file. There are some common solutions for organizing this data. One is to have one additional file that contains all of the metadata for the files being synced. Another is to make an additional data file for each content file. The solution that I prefer isn’t quite either of these. I prefer to keep the data within an “alternate stream” of the same file. If the file gets moved elsewhere on the file system, the additional data moves with it.

This is very much a Windows-Only solution. This will only work on the NTFS file system. If you attempt to access an alternative stream on a FAT32 file system, it will fail since that file system does not support them.

I most recently used this system of organization when I inherited a project that was, in my opinion, built with the wrong technology. In all fairness, many features of the application in question were implemented through scope creep. I reimplemented the application in about a couple of days using .Net technologies (this was much easier for me to do since, unlike the original developers, I had the benefit of complete requirements). There are a lot of aspects of that project that will be expressed in posts in the coming weeks.

The reason that I call this feature “hidden” is because the Windows UI does not give any visual indicator that a file has an additional data stream. There is no special icon. If you check on the size of a file, it will only show you the size of the main data stream. (In theory, one could add 3 gigs of data to the secondary stream of a 5 byte file and the OS would only report the file as being 5 bytes in size).

I’ll demonstrate accessing this stream from within the .Net Framework. There’s no built-in support for alternative data streams, but there is built-in support for Windows file handles. Using a P/Invoke, you can get a Windows file handle and then pass it to a .Net FileStream object to be used with all the other .Net features.

Every file has a default data stream. This is the stream you would see as being the normal file. It contains the file data that you are usually working with. With our normal concept of files and directories, files contain data and directories contain files, but no data directly. A file can contain any number of alternative data streams. Each one of these streams has a name of your choosing. Directories can have alternative data streams too!

To start experimenting with streams, you only need the command prompt. Open the command prompt, navigate to a directory in which you will place your experimental streams, and type the following.

echo This is data for my alternative stream > readme.txt:stream

If we get a directory listing, we find that the file is listed as zero bytes in size.

c:\temp\streams>echo This is data for my alternative stream > readme.txt:stream

c:\temp\streams>dir
 Volume in drive C has no label.
 Volume Serial Number is 46FF-0556

 Directory of c:\temp\streams

10/11/2021  11:22 AM    <DIR>          .
10/11/2021  11:22 AM    <DIR>          ..
10/11/2021  11:22 AM                 0 readme.txt
               1 File(s)              0 bytes
               2 Dir(s)  108,162,564,096 bytes free

c:\temp\streams>

Is the data really there? From the command line, we can view the data using the more command (the type command doesn’t accept the syntax needed to refer to the stream).

c:\temp\streams>more < readme.txt:stream
This is data for my alternative stream

Windows uses alternative data streams for various system purposes. There are a number of names that you may encounter in files that Windows manages. This is a list of some well known stream names.

  • $DATA – This is the default stream. This stream contains the main (regular) data for a file. If you open README.TXT, this has the same effect as opening README.TXT:$DATA.
  • $BITMAP – data used for managing a b-tree for a directory. This is present on every directory.
  • $ATTRIBUTE_LIST – A list of attributes for a file.
  • $FILE_NAME – Name of file in unicode characters, including short name and hard links
  • $INDEX_ALLOCATION – used for managing large directories

There are some other names. In general, with the exception of $DATA, I would suggest not altering these streams.

Windows does give you the ability to list alternative streams through PowerShell. We will look at that in a moment. For now, let’s say you had to make your own tool for managing such resources. The utility of this example is that it gives us an opportunity to see how we might work with these resources in code. One of the first tools I think would be useful is a command line tool that transfers data from one stream to another. With this tool, I can read from a stream and write it either to the console or to another stream. The only thing that affects where it is written is the file name. It only took a few minutes to write such a tool using C++. It is small enough to put the entirety of the code here.

#include <Windows.h>
#include <iostream>
#include <string>
#include <vector>
#include <list>

using namespace std;

const int BUFFER_SIZE = 1048576;

int main(int argc, CHAR ** argv)
{
    wstring InputStreamName = L"";
    wstring OutputStreamName = L"con:";
    wstring InputPrefix = L"--i=";
    wstring OutputPrefix = L"--o=";

    wstring Instructions =
        L"To use this tool, provide an input and an output file for it. The syntax looks like the following.\r\n\r\n"
        L"StreamStreamer --i=inputFileName --o=OutputFileName.ext::streamName\r\n\r\n";

    HANDLE hInputFile = INVALID_HANDLE_VALUE;
    HANDLE hOutputFile = INVALID_HANDLE_VALUE;

    vector<wstring> arguments(argc);

    for (auto i = 0; i < argc; ++i)
    {
        auto arg = string(argv[i]);
        arguments[i] = (wstring(arg.begin(), arg.end()));
    }

    for (int i = 0; i < argc; ++i)
    {
        if (!arguments[i].compare(0, InputPrefix.size(), InputPrefix))
            InputStreamName = arguments[i].substr(InputPrefix.size());
        if (!arguments[i].compare(0, OutputPrefix.size(), OutputPrefix))
            OutputStreamName = arguments[i].substr(OutputPrefix.size());
    }

    if ((!InputStreamName.size()) || (!OutputStreamName.size()))
    {
        wcout << Instructions;
        return 0;
    }

    hInputFile = CreateFile(InputStreamName.c_str(), GENERIC_READ, FILE_SHARE_READ, 0, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, 0);
    if (hInputFile != INVALID_HANDLE_VALUE)
    {
        hOutputFile = CreateFile(OutputStreamName.c_str(), GENERIC_WRITE, FILE_SHARE_READ, 0, CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, 0);
        if (hOutputFile != INVALID_HANDLE_VALUE)
        {
            vector<char> buffer = vector<char>(BUFFER_SIZE);
            DWORD bytes_read = 0;
            DWORD bytes_written = 0;
            do {
                bytes_read = 0;
                if (ReadFile(hInputFile, &buffer[0], BUFFER_SIZE, &bytes_read, 0))
                    WriteFile(hOutputFile, &buffer[0], bytes_read, &bytes_written, 0);
            } while (bytes_read > 0);
            CloseHandle(hOutputFile);
        }
        CloseHandle(hInputFile);
    }
}


Usage is simple. The tool takes two arguments: an input stream name and an output stream name, passed prefixed with --i= and --o= respectively. If no output name is specified, it defaults to an output name of con:. This name, con:, refers to the console; it has been a reserved file name for the console since the DOS days of 30+ years ago. I have the vague idea there may be some other console name, but could not find it. con: worked then, and it still works now, so I’m sticking with it.

After compiling this, I can use it to retrieve the text that I attached to the stream earlier.

c:\temp\streams>StreamStreamer.exe --i=readme.txt:Stream
This is data for my alternative stream

c:\temp\streams>

I can also use it to take the contents of some other arbitrary file and attach it to an existing file in an alternative stream. In testing, I took a JPG I had of the moon and attached it to a file. Then I extracted it from that alternative stream and wrote it to a different regular file just to ensure that I had an unaltered data stream.

c:\temp\streams>StreamStreamer.exe --i=Moon.JPG --o=readme.txt:moon

c:\temp\streams>StreamStreamer.exe --i=readme.txt:moon --o=m.jpg

c:\temp\streams>dir *.jpg
 Volume in drive C has no label.
 Volume Serial Number is 46FF-0556

 Directory of c:\temp\streams


10/12/2021  02:54 PM         3,907,101 m.jpg
12/21/2020  07:46 PM         3,907,101 Moon.JPG
               3 File(s)      7,814,202 bytes
               0 Dir(s)  105,063,383,040 bytes free

c:\temp\streams>

You will probably want the ability to see what streams are inside of a file. You could download the Streams tool from Sysinternals, or you could use PowerShell. PowerShell has built-in support for streams, and I’ll be using it throughout the rest of this writeup. To view streams with PowerShell, use the Get-Item command with the -Stream * parameter.

PS C:\temp\streams> Get-Item .\readme.txt -stream *


PSPath        : Microsoft.PowerShell.Core\FileSystem::C:\temp\streams\readme.txt::$DATA
PSParentPath  : Microsoft.PowerShell.Core\FileSystem::C:\temp\streams
PSChildName   : readme.txt::$DATA
PSDrive       : C
PSProvider    : Microsoft.PowerShell.Core\FileSystem
PSIsContainer : False
FileName      : C:\temp\streams\readme.txt
Stream        : :$DATA
Length        : 15

PSPath        : Microsoft.PowerShell.Core\FileSystem::C:\temp\streams\readme.txt:moon.jpg
PSParentPath  : Microsoft.PowerShell.Core\FileSystem::C:\temp\streams
PSChildName   : readme.txt:moon.jpg
PSDrive       : C
PSProvider    : Microsoft.PowerShell.Core\FileSystem
PSIsContainer : False
FileName      : C:\temp\streams\readme.txt
Stream        : moon.jpg
Length        : 3907101

PSPath        : Microsoft.PowerShell.Core\FileSystem::C:\temp\streams\readme.txt:stream
PSParentPath  : Microsoft.PowerShell.Core\FileSystem::C:\temp\streams
PSChildName   : readme.txt:stream
PSDrive       : C
PSProvider    : Microsoft.PowerShell.Core\FileSystem
PSIsContainer : False
FileName      : C:\temp\streams\readme.txt
Stream        : stream
Length        : 43



PS C:\temp\streams>

If you are making an application that uses alternative streams, you will want to know how to list the streams from within it also. That is also easy to do. Since the much beloved Windows Vista, we’ve had a Win32 API for enumerating streams. The functions FindFirstStreamW/FindFirstStreamTransactedW and FindNextStreamW will do this for you. Take note that only Unicode versions of these functions exist; ASCII variations are non-existent. If you have ever used FindFirstFile and FindNextFile, the usage is similar.

Two variables are needed to search for streams. One variable is a HANDLE that is used as an identifier for the resources and state of the search request. The other is a WIN32_FIND_STREAM_DATA structure into which data on each stream that is found is put. FindFirstStreamW will return a handle and populate a WIN32_FIND_STREAM_DATA with the first stream it finds. From there, each time FindNextStreamW is called with the HANDLE that had been returned earlier, it will populate a WIN32_FIND_STREAM_DATA with the information on the next stream. When no more streams are found, FindNextStreamW returns FALSE and GetLastError returns ERROR_HANDLE_EOF.

#include <Windows.h>
#include <iostream>
#include <vector>
#include <string>

using namespace std;

int main(int argc, char**argv)
{
    WIN32_FIND_STREAM_DATA fsd;
    HANDLE hFind = NULL;
    vector<wstring> arguments(argc);

    for (auto i = 0; i < argc; ++i)
    {
        auto arg = string(argv[i]);
        arguments[i] = (wstring(arg.begin(), arg.end()));
    }

    if (arguments.size() < 2)
        return 0;
    wstring fileName = arguments[1];

    try {
        hFind = FindFirstStreamW(fileName.c_str(), FindStreamInfoStandard, &fsd, 0);
        if (hFind == INVALID_HANDLE_VALUE) throw ::GetLastError();
        const int BUFFER_SIZE = 8192;
        WCHAR buffer[BUFFER_SIZE] = { 0 };
        WCHAR fileNameBuffer[BUFFER_SIZE] = { 0 };

        wcout << L"The following streams were found in the file " << fileName << endl;
        for (;;)
        {
            swprintf(fileNameBuffer, BUFFER_SIZE, L"%s%s", fileName.c_str(), fsd.cStreamName);
            // StreamSize is a LARGE_INTEGER; print its 64-bit QuadPart.
            swprintf_s(buffer, BUFFER_SIZE, L"%-50s %lld", fileNameBuffer, fsd.StreamSize.QuadPart);
            wstring formattedDescription = wstring(buffer);
            wcout << formattedDescription << endl;

            if (!::FindNextStreamW(hFind, &fsd))
            {
                DWORD dr = ::GetLastError();
                if (dr != ERROR_HANDLE_EOF) throw dr;
                break;
            }
        }
    }
    catch (DWORD err)
    {
        wcout << "Oops, Error happened. Windows error number " << err;
    }
    if (hFind != NULL && hFind != INVALID_HANDLE_VALUE)
        FindClose(hFind);
}

For my actual application purposes, I don’t need to query the streams in a file. The streams of interest to me have a predetermined name. Instead of querying for them, I attempt to open the stream. If it isn’t there, I get an error code indicating as much. Otherwise, I have a file HANDLE for reading and writing. With what I’ve written so far, you could begin using this feature in C/C++ immediately. But my target is the .Net Framework. How do we use this information there?
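
Before moving on to .Net, here is what that attempt-to-open approach looks like in plain Win32 C++. This is a minimal sketch; the file name and the stream name meta are stand-ins (meta is the stream name convention I describe below).

#include <Windows.h>
#include <iostream>

int main()
{
    // Open the named stream directly rather than enumerating streams.
    HANDLE hMeta = CreateFileW(L"readme.txt:meta", GENERIC_READ, FILE_SHARE_READ,
                               0, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, 0);
    if (hMeta == INVALID_HANDLE_VALUE)
    {
        // ERROR_FILE_NOT_FOUND here means the stream (or the file) doesn't exist.
        std::wcout << L"No meta stream; error " << GetLastError() << std::endl;
        return 1;
    }
    // The handle can now be used with ReadFile/WriteFile like any other file.
    CloseHandle(hMeta);
    return 0;
}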

In Win32, you can read or write these alternative data streams as you would any other file by using the correct stream name. If you try that within the .Net Framework, it won’t work. Before even hitting the Win32 APIs, the .Net Framework will treat the stream name as an invalid file name. To work around this, you’ll need to P/Invoke the Win32 API for opening files. Thankfully, once you have a file handle, the .Net Framework will work with that file handle just fine and allow you to use all the methods that you would with any other stream.

Before adding the P/Invoke that is needed to use this functionality in .Net, let’s define a few numerical constants.

    public partial class NativeConstants
    {
        public const uint GENERIC_WRITE = 1073741824;
        public const uint GENERIC_READ = 0x80000000;
        public const int FILE_SHARE_DELETE = 4;
        public const int FILE_SHARE_WRITE = 2;
        public const int FILE_SHARE_READ = 1;
        public const int OPEN_ALWAYS = 4;
    }

These may look familiar. These constants have the same names as constants that are used in C when calling the Win32 API. These constants, as their names suggest, are used to indicate the mode in which files should be opened. Now for the P/Invoke for the call that opens files.

    public partial class NativeMethods
    {
        [DllImportAttribute("kernel32.dll", EntryPoint = "CreateFileW")]
        public static extern System.IntPtr CreateFileW(
            [InAttribute()][MarshalAsAttribute(UnmanagedType.LPWStr)] string lpFileName,
            uint dwDesiredAccess,
            uint dwShareMode,
            [InAttribute()] System.IntPtr lpSecurityAttributes,
            uint dwCreationDisposition,
            uint dwFlagsAndAttributes,
            [InAttribute()] System.IntPtr hTemplateFile
        );

    }

That’s it! That is the only P/Invoke that is needed.

The data that I was writing to these files was metadata on files for matching them up with entries in a CMS. This includes information like the last date that a file was updated on the CMS, a CRC or ETAG for knowing whether the version on the local computer is the same as the one on the CMS, and a title for presenting to the user (which may be different from the file name itself). I’ve decided that the name of the stream in which I am placing this data will simply be meta. I’m using JSON for the data encoding; for your purposes, you could use any format that fits your application. Let’s open the stream and read the metadata back.

I’ll use the Win32 CreateFileW function to get a file handle. That handle is passed to the .Net FileStream constructor. From there, there is no difference in how I would read or write this stream compared to any other FileStream.

var filePath = Path.Combine(ds.DownloadCachePath, $"{fe.ID}{extension}");
FileInfo fi = new FileInfo(filePath);
var fullPath = fi.FullName;
if (fi.Exists)
{
    var metaStream = NativeMethods.CreateFileW(
        $"{fullPath}:meta",
        NativeConstants.GENERIC_READ,
        NativeConstants.FILE_SHARE_READ,
        IntPtr.Zero,
        NativeConstants.OPEN_ALWAYS,
        0,
        IntPtr.Zero);
    using (StreamReader sr = new StreamReader(new FileStream(metaStream, FileAccess.Read)))
    {
        try
        {
            var metaData = sr.ReadToEnd();
            if (!String.IsNullOrEmpty(metaData))
            {
                var data = JsonConvert.DeserializeObject<FileEntry>(metaData);
                fe.LastModified = data.LastModified;
            }
        } catch(IOException exc)
        {
            // If the meta stream is absent or unreadable, treat the file as
            // having no cached metadata.
        }
    }
}
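
Writing the metadata out is the mirror image of the read. Here is a sketch under the same assumptions (fe is the FileEntry being persisted); note GENERIC_WRITE and FileAccess.Write in place of their read counterparts.

var metaStream = NativeMethods.CreateFileW(
    $"{fullPath}:meta",
    NativeConstants.GENERIC_WRITE,
    NativeConstants.FILE_SHARE_READ,
    IntPtr.Zero,
    NativeConstants.OPEN_ALWAYS,
    0,
    IntPtr.Zero);
using (StreamWriter sw = new StreamWriter(new FileStream(metaStream, FileAccess.Write)))
{
    // Serialize the same FileEntry structure that the read path deserializes.
    sw.Write(JsonConvert.SerializeObject(fe));
}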

I said earlier that this is a Windows-only solution and that it doesn’t work on the FAT32 file system. One implication is that if you are using this in a .Net environment running on another operating system, it won’t work; it will likely fail since the P/Invokes won’t be able to bind. The other potential problem demands an active check within the code. If a program using alternative file streams is given a FAT32 file system to work with, it should detect that it is on the wrong type of file system before trying to perform actions that will fail. Detecting the file system type only requires a few lines of code. In .Net, the following code will take the path of the currently running assembly, see what drive it is on, and retrieve the file system type.

 static String GetAssemblyDriveFileSystem()
 {
     String assemblyPath = typeof(FileSystemDetector).Assembly.Location;
     String driveLetter = assemblyPath[0].ToString();
     DriveInfo driveInfo = new DriveInfo(driveLetter);
     string fsType = driveInfo.DriveFormat;
     return fsType;
 }

If this code is run on a drive using the NTFS file system, the return value will be the string NTFS. If it is anything else, know that attempts to access alternative streams will fail. If you try to copy these files to a FAT32 drive, Windows will warn you of a loss of data; only the default streams will be copied to the FAT32 drive.

In the next posts on this I will demonstrate a practical use. I’ll also talk about what some might see as a security concern with alternate file streams.

Using an iPhone as a Beacon

As with any project, I had a list of milestones and dates on which I expected to hit them leading up to project completion. One of the elements of the project was an application that needed to detect its proximity to other devices to select a device for interaction. I planned to use iBeacons for this and had a delivery date on some beacons for development. The delivery date came, a box with the matching tracking number came, but there were no iBeacons inside. Instead, there was a phone case. This isn’t the first time I have ordered one item and Amazon has sent me another. I went online and filled out a form to have the order corrected. They stated I would have the item in another 5 days. In the meantime, I didn’t want to let progress slip. I’ve heard several times that “you can use an iPhone as an iBeacon.” I now had motivation to look into this. You can, in fact, use a phone as an iBeacon, but you have to write an application yourself to use it this way.

When I took a quick look in the App Store, I couldn’t find an app for this purpose, so I decided to make an application myself. It isn’t hard. In my case, I’m emulating an iBeacon as a stand-in for actual hardware, but there are other reasons you might want to do this. For example, if I were using an iPad as a display showing more information on an exhibit, users browsing the exhibit could interact with the content on the display using their own phones. The iBeacon signal could be used so that the user’s phone knows which display it is close to, allowing them to trigger interactions from their own phone (a valuable method of interaction given the higher concerns over hygiene and shared touch surfaces).

Beacons are uniquely identified by three pieces of data: the UUID, Major number, and Minor number. A UUID, or Universally Unique ID, is usually shared among a group of iBeacons that are associated with the same entity. The usage of the Major and Minor numbers is up to the entity. Usually the Major will be used to group related iBeacons together, with the Minor number being used as a unique ID within the set. I’ll talk more about these numbers in another post.

For my iPhone application, I have created a few variables to hold the simulated Beacon’s identifiers. I also have a variable to track whether the iBeacon is active, and have defined a Zero UUID to represent a UUID that has not been assigned a value.

class BeaconManager {
    
    var objectWillChange = PassthroughSubject<Void, Never>()
    
    let ZeroUUID = UUID.init(uuidString: "00000000-0000-0000-0000-000000000000")
    
    var BeaconUUID = UUID(uuidString: "00000000-0000-0000-0000-000000000000") {
        didSet { updateUI() }
    }
    
    var Major:UInt16 = 100 {
        didSet { updateUI() }
    }
    
    var Minor:UInt16 = 2 {
        didSet { updateUI() }
    }
    
    var IsActive:Bool = false {
        didSet { updateUI() }
    }
}

I am going to use SwiftUI for displaying information. That is why setting these variables also triggers a call to updateUI(). There are some callbacks made by Apple’s iBeacon API; for these, I’ll also need to implement CBPeripheralManagerDelegate. This protocol is defined in CoreBluetooth. We also need permission for the device to advertise its presence over Bluetooth. Bluetooth is often used for indoor location (which will be my ultimate intention). Let’s get all these other pieces in place. The necessary import statements and inheritance will look like the following.

import Foundation
import CoreLocation
import CoreBluetooth
import Combine

class BeaconManager: NSObject, CBPeripheralManagerDelegate, Identifiable, ObservableObject {
   ...
}

For the Bluetooth permission that the application needs, a new String value must be added to the Info.plist. The item’s key is NSBluetoothAlwaysUsageDescription. The value should be a text description that will be presented to the user letting them know why the application is requesting Bluetooth permissions.
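
In the Info.plist source, the entry looks like the following; the wording of the description string is up to you.

<key>NSBluetoothAlwaysUsageDescription</key>
<string>Bluetooth is used to advertise this device as an iBeacon.</string>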

I want the simulated iBeacon to have the same identifiers every time the application runs. At runtime, the application checks whether there is a UUID already saved in the settings. If there is not one, it generates a new UUID and saves it to the settings. From then on, it will always use the same ID. I do the same thing with the Major and Minor numbers using the UInt16.random(in:) function. Together, this information is used to create a CLBeaconRegion.

    func createBeaconRegion() -> CLBeaconRegion {
        let settings = UserDefaults.standard
        // Reuse a previously saved UUID when one exists.
        if let savedUUID = settings.string(forKey: BEACON_UUID_KEY) {
            if let tempBeaconUUID = UUID(uuidString: savedUUID) {
                BeaconUUID = tempBeaconUUID
            }
        }
        if(BeaconUUID == nil || BeaconUUID == ZeroUUID) {
            BeaconUUID = UUID()
            settings.setValue(BeaconUUID!.uuidString, forKey: BEACON_UUID_KEY)
        }
        // UserDefaults.integer(forKey:) returns 0 when no value has been saved,
        // so 0 doubles as the "not yet assigned" marker.
        let majorValue = settings.integer(forKey: BEACON_MAJOR_KEY)
        if(majorValue == 0) {
            Major = UInt16.random(in: 1...65535)
            settings.set(Int(Major), forKey: BEACON_MAJOR_KEY)
        } else {
            Major = UInt16(majorValue)
        }
        let minorValue = settings.integer(forKey: BEACON_MINOR_KEY)
        if(minorValue == 0) {
            Minor = UInt16.random(in: 1...65535)
            settings.set(Int(Minor), forKey: BEACON_MINOR_KEY)
        } else {
            Minor = UInt16(minorValue)
        }
        let major:CLBeaconMajorValue = Major
        let minor:CLBeaconMinorValue = Minor
        let beaconID = "net.domain.application"
        return CLBeaconRegion(proximityUUID: BeaconUUID!, major: major, minor: minor, identifier: beaconID)
    }

When I first tried to use the CLBeaconRegion, it failed, and I was confused. After a bit more reading, I found out why. The Bluetooth radio can take a moment to initialize into the mode that the code needs. Trying to use it too soon can result in failure. To fix this, wait for a callback to CBPeripheralManagerDelegate::peripheralManagerDidUpdateState(_ peripheral: CBPeripheralManager). In the handler for this callback, check whether the .state of the peripheral variable is .poweredOn. If it is, then we can start using our CLBeaconRegion. We can call startAdvertising on the CBPeripheralManager object to make the iBeacon visible. When we want the phone to no longer act as an iBeacon, we can call stopAdvertising. Note that the device will only continue to transmit while the application has focus. If the application gets pushed to the background, the phone will stop presenting as an iBeacon.

    // Declared on the class; created when start() below is called.
    var peripheral:CBPeripheralManager?

    func peripheralManagerDidUpdateState(_ peripheral: CBPeripheralManager) {
        if(peripheral.state == .poweredOn) {
            let beaconRegion = createBeaconRegion()
            let peripheralData = beaconRegion.peripheralData(withMeasuredPower: nil)
            // The advertisement dictionary comes straight from the beacon region.
            peripheral.startAdvertising(((peripheralData as NSDictionary) as! [String:Any]))
            IsActive = true
        }
    }

    func start() {
        if(!IsActive) {
            peripheral = CBPeripheralManager(delegate:self, queue:nil)
        }
    }
    
    func stop() {
        if(IsActive) {
            if (peripheral != nil){
                peripheral!.stopAdvertising()
            }
            IsActive = false
        }
    }

The code above makes up the class I used for simulating the iBeacon. For the simplest use case, just instantiate the class and call the start() method. Provided the Info.plist has been populated with a value for NSBluetoothAlwaysUsageDescription and the user has granted permission, it should just work. In the next post, let’s look at how to detect iBeacons from an iOS application. That application won’t be limited to detecting iPhones acting as iBeacons; it will work with real iBeacons too. As of now, I have gotten my hands on a physical iBeacon-compatible transmitter. While any iBeacon transmitter should work, if you would like to follow along with the same iBeacon that I am using, you can purchase the following from Amazon (affiliate link).

Posts may contain products with affiliate links. When you make purchases using these links, we receive a small commission at no extra cost to you. Thank you for your support.


Twitter: @j2inet
Instagram: @j2inet
Facebook: j2inet
YouTube: j2inet