Creating a Service on a Raspberry Pi or Jetson Nano

Creating a service on a Raspberry Pi or a Jetson is easier than I thought. At the same time, there is still a lot of information to sort through. I’m still exploring the various settings that can be applied to a service. But I wanted to share the information that I thought would be immediately useful. While I was motivated to explore this by something I was doing on a Jetson Nano, the code and instructions work identically, with no modification, on a Raspberry Pi.

I have a Jetson Mate. The Jetson Mate is an accessory for the Jetson Nano or Jetson Xavier NX modules. Up to 4 modules can be placed within the Jetson Mate to form a cluster. Really, the Jetson Mate is just a convenient way to power multiple Jetsons and connect them to a wired network. It contains a 5-port switch so that a single network cable can be used to connect all of the modules. Despite the Jetsons being in the same box, they don’t have an immediate way to know about each other. The documentation from Seeed Studio suggests logging into your router and finding the IP addresses there.

That approach is fine when I’m using the Jetsons from my house; I have complete access to the network here. But that’s not always possible. On some other networks I may not have access to the router settings. I made a program that lets the Jetsons announce their presence over UDP multicast. This could be useful on my Pis also; I run many of them as headless units. I needed this program to start automatically after the device was powered on and to keep running. How do I do that? By making it a service.

There are several ways that one could schedule a task to run on Linux. I’m using systemd. Systemd was designed to unify service configurations across Linux distributions. The information shown here has applicability well beyond the Pi and Jetson.

The details of how my discovery program works are a discussion for another day. Let’s focus on what is necessary for making a service. For a sample service, let’s make a program that does nothing more than increment a variable and output the new value of the variable. The code that I show here is available on GitHub ( https://github.com/j2inet/sample-service ). But it is small enough to place here also. This is the program.

#include <chrono>
#include <iostream>
#include <thread>

using namespace std;

int main(int argc, char** argv) 
{
    int counter = 0;
    while(true)
    {
        cout << "This is cycle " << ++counter << endl;
        std::this_thread::sleep_for(std::chrono::seconds(10));
    }
}

This program counts, outputting the updated count once every ten seconds. To build the program, you will need to have cmake installed. To install it, use the following command at the terminal.

sudo apt-get install cmake -y

Once that is installed, from the project directory only a couple of commands are needed to compile the program.

cmake ./
make
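
These commands assume the project’s CMakeLists.txt is present. If you are recreating the project rather than cloning it, a minimal CMakeLists.txt along these lines would work. This is a sketch; the file in the repository may differ, and I’m assuming the source file is named main.cpp.

cmake_minimum_required(VERSION 3.10)
project(service-sample)

set(CMAKE_CXX_STANDARD 11)

# std::this_thread::sleep_for needs the platform thread library on Linux
find_package(Threads REQUIRED)
add_executable(service-sample main.cpp)
target_link_libraries(service-sample Threads::Threads)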

The program is built, and a new executable named service-sample is now in the folder. If you run it, you will see the program counting. Press CTRL-C to terminate the program. Now we are going to make it into a service.

To make a service, you will need to copy the executable to a specific folder and also provide a file with the settings for the service. For the service settings, I’ve made a file named similarly to the executable. This isn’t a requirement, but it’s something that I’ve chosen to do for easier association. In a file named service-sample.service I’ve placed the settings for the service. Many of these settings are technically optional; you only need to set them if your specific service depends on them. I’m showing more than is necessary for this service because I think some of these settings will be useful to you for other projects and wanted to provide an example.

[Unit]
Description=Counting service.
Wants=network.target
After=syslog.target network-online.target

[Service]
Type=simple
ExecStart=/usr/local/bin/service-sample
Restart=on-failure
RestartSec=10
KillMode=process

[Install]
WantedBy=multi-user.target

Here is what some of those settings mean. Note that I also describe some settings that are not used here but are available for you to consider. You can also see documentation for this file in the man pages.

[Unit] section

Documentation viewable with the following command

man systemd.unit

Setting – Meaning

Description – A short text description of the service.
Documentation – URIs at which documentation for the service can be found.
Requires – Other units that will be activated or deactivated in conjunction with this unit.
Wants – Expresses weak dependencies. systemd will try to activate these dependencies first, but if those dependencies fail, this unit is unaffected.
Conflicts – Prevents this unit from running at the same time as a conflicting unit.
After/Before – Used to express the order in which units are started. These settings contain a space-delimited list of unit names.

[Install] Section

Documentation for the [Install] section is found in the same systemd.unit man page referenced above.

Setting – Meaning

RequiredBy / WantedBy – Starts the current service if any of the listed services are started. WantedBy is a weaker dependency than RequiredBy.
Also – Specifies services that are to be started or disabled along with this service.

[Service] Section

Documentation for the [Service] section is viewable with the following command.

man systemd.service

Setting – Meaning

Type – How the service starts:
  • simple – (default) starts the service immediately
  • forking – the service is considered started once the process has forked and the parent has exited
  • oneshot – similar to simple, but assumes the service does its job and exits
  • notify – considers the service started when it sends a signal to systemd

ExecStart – Commands with arguments to execute to start the service. Note that when Type=oneshot, multiple commands can be listed; they are executed sequentially.
ExecStop – Commands to execute to stop the service.
ExecReload – Commands to execute to trigger a configuration reload of the service.
Restart – When this option is enabled, the service will be restarted when the service process exits or is killed.
RemainAfterExit – When true, the service is considered active even after all of its processes have exited. Mostly used with Type=oneshot.
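
To illustrate a few of those settings together, here is a hypothetical one-shot unit. The commands and paths are invented for the example.

[Unit]
Description=One-time setup example

[Service]
Type=oneshot
# With Type=oneshot, multiple ExecStart commands run in sequence
ExecStart=/usr/local/bin/prepare-data-dir
ExecStart=/usr/local/bin/import-defaults
# Keep the unit reported as active after the commands finish
RemainAfterExit=true

[Install]
WantedBy=multi-user.target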

Deploying

Having the executable and this service file is not by itself enough. They must also be moved to the appropriate locations and the service must be activated. I’ve placed the steps for doing this in a script. The script is intentionally a bit verbose to make it clear what it is doing at any point. The first thing that the script does is terminate the service. While this might sound odd given that we haven’t installed the service yet, I do this to make the script rerunnable. If this is not the first time that the script has run, it is possible that the service process is running. To be safe, I terminate it.

Next, I copy the files to their appropriate locations. For this simple service those files are one executable binary and the service settings. The executable is placed in /usr/local/bin. The service settings are copied to /etc/systemd/system/. The permissions on the service settings file are changed with chmod, ensuring the owner has read/write permissions and the group has read permission.

With the files for the service in place, we next ask systemd to reload the service definitions. I then probe the status for my service. While my service isn’t running, I should see it listed. I then enable the service (so that it will run on system startup) and then start the service (so that I don’t need to reboot to see it running now) and then probe the system status again.

echo "stopping service. Note that the service might not exist yet."
sudo systemctl stop service-sample

echo "--copying files to destination--"
sudo cp ./service-sample /usr/local/bin
sudo cp ./service-sample.service /etc/systemd/system/service-sample.service
echo "--setting permissions on file--"
sudo chmod 640 /etc/systemd/system/service-sample.service
echo "--reloading daemon and service definitions--"
sudo systemctl daemon-reload
echo "--probing service status--"
sudo systemctl status service-sample
echo "--enabling service--"
sudo systemctl enable service-sample
echo "--starting service--"
sudo systemctl start service-sample
echo "--probing service status--"
sudo systemctl status service-sample

After the service is installed and running, you can use the command for probing the status to see what it is up to. The last few lines of the service’s output will be displayed with the service information. Probe the service status at any time using this command.

sudo systemctl status service-sample

Sample output from the command follows.

pi@raspberrypi:~ $ sudo systemctl status service-sample
● service-sample.service - Counting service.
   Loaded: loaded (/etc/systemd/system/service-sample.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2022-03-09 15:57:12 HST; 12min ago
 Main PID: 389 (service-sample)
    Tasks: 1 (limit: 4915)
   CGroup: /system.slice/service-sample.service
           └─389 /usr/local/bin/service-sample

Mar 09 16:09:29 raspberrypi service-sample[389]: This is cycle 361
Mar 09 16:09:31 raspberrypi service-sample[389]: This is cycle 362
Mar 09 16:09:33 raspberrypi service-sample[389]: This is cycle 363
Mar 09 16:09:35 raspberrypi service-sample[389]: This is cycle 364
Mar 09 16:09:37 raspberrypi service-sample[389]: This is cycle 365
Mar 09 16:09:39 raspberrypi service-sample[389]: This is cycle 366
Mar 09 16:09:41 raspberrypi service-sample[389]: This is cycle 367
Mar 09 16:09:43 raspberrypi service-sample[389]: This is cycle 368
Mar 09 16:09:45 raspberrypi service-sample[389]: This is cycle 369
Mar 09 16:09:47 raspberrypi service-sample[389]: This is cycle 370
pi@raspberrypi:~ $
Screenshot of service output. Note the green dot indicates the service is running.

The real test for the service comes after reboot. Once you have the service installed and running on your Jetson or your Pi, reboot it. After it boots up, probe the status again. If you see output, then congratulations, your service is running! Now that a service can be easily created and registered, I’m going to refine the code that I used for discovery of the Pis and Jetsons for another post.
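
The status command only shows a snapshot with the last few lines of output. If you want to watch the service’s output continuously, journalctl can follow the unit’s log.

sudo journalctl -u service-sample -f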


Emoji Only Text Entry on Android

Android supports a number of different text input types. If you create a text field in which someone is meant to enter a phone number, address, or email address, you can set the input type on the text field to have the keyboard automatically restrict what characters it presents to the user.

<EditText android:inputType="phone" />

I was working on something for which I needed to ensure that the user selected an emoji. Unfortunately, there’s no input type to restrict a user to emoji. How do I ensure that the user can only enter emoji? I could implement my own keyboard that only displays emoji, but for the time being I do not want to implement such a thing for the application that I’m building. There are a number of possible solutions for this. The one I chose was to make a custom text input field and an InputFilter for it.

Making a custom field may sound like a lot of work, but it isn’t. The custom field itself is primarily declarations, with only a single line of initialization code that applies the filter. The real work is done in the filter itself. For a custom InputFilter, make a class that derives from InputFilter. There is a single method to define on the class, named filter.

The filter method receives the text sequence that is going to be assigned to the text field. If we want to allow the value to be assigned, the function can return null. If there is a character of which I don’t approve, I’ll return an empty string, which rejects the entered text. If I wanted to replace a character, I could return a string in which I had performed the replacement. The value returned by this function will be applied to the text field.

In my implementation of this function, I step through each character in the string (passed as a CharSequence; a String is a type of CharSequence) and check the class of each character. If a character is not of an acceptable class, I return an empty string to reject the input. For your purposes, you may want to strip characters out and return the resulting string instead; I sketch that variation after the code below.

The function Character::getType will return the character’s type, or class. To ensure that the character is an emoji, I check whether the type equals Character::SURROGATE or Character::OTHER_SYMBOL.

private class EmojiFilter : InputFilter {

    override fun filter(
        source: CharSequence?,
        start: Int,
        end: Int,
        dest: Spanned?,
        dstart: Int,
        dend: Int
    ): CharSequence? {
        for (i in start until end) {
            val type = Character.getType(source!!.get(i)).toByte()
            if (type != Character.SURROGATE && type != Character.OTHER_SYMBOL) {
                // Not an emoji character type; return an empty string to reject the input
                return ""
            }
        }
        return null;
    }
}
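
If you would rather strip out the offending characters than reject the edit wholesale, the same filter method can return a filtered subsequence instead. Here is a sketch of that variation, written as a separate filter class for illustration.

private class StrippingEmojiFilter : InputFilter {

    override fun filter(
        source: CharSequence?,
        start: Int,
        end: Int,
        dest: Spanned?,
        dstart: Int,
        dend: Int
    ): CharSequence? {
        // Keep only the characters whose type marks them as emoji-related
        val kept = source?.subSequence(start, end)?.filter { c ->
            val type = Character.getType(c).toByte()
            type == Character.SURROGATE || type == Character.OTHER_SYMBOL
        }
        // Returning null accepts the input unchanged; otherwise return the stripped text
        return if (kept == null || kept.length == end - start) null else kept
    }
}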

Now that the filter class is defined, I can create my custom text field. It has next to no code within it.

public class FilteredEditText : androidx.appcompat.widget.AppCompatEditText {

    constructor(context:Context, attrs: AttributeSet?, defStyle:Int):super(context,attrs,defStyle)
    {
    }

    constructor(context:Context, attrs: AttributeSet?):super(context,attrs)
    {
    }

    constructor(context:Context) : super(context) {
    }

    init{
        filters = arrayOf(EmojiFilter())
    }

}

Great, we have our custom class! Now how do we use it in a layout? We can use it in the layout by declaring a view with the fully qualified name.

    <net.j2i.emojidiary.FilteredEditText
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:text="😀"
        />

And that’s it. While Android doesn’t give the option of showing only the Emoji characters, with this in place I’m assured that a user will only be able to enter Emoji characters.

Working with the Phidgets RFID Reader

I recently worked on a project that made use of Phidgets hardware. Phidgets offers many products for interfacing hardware sensors to a computer using USB. Provided that the driver is installed, using the hardware is pretty straightforward. They have software components available for a variety of languages and operating systems. For the project that I was working on, a computer would have multiple Phidget RFID readers. When there are multiple instances of hardware on a system, a specific instance can be addressed through its serial number. On this project, the serial numbers were stored in a configuration file. For the single machine on which this project was deployed, that was fine. Once that was a success, the client wanted the software deployed to another seven machines.

The most direct strategy for this would be to make a configuration file for each machine that had its specific serial numbers in it. I did this temporarily, but I am not a fan of having differences in deployment files. A simple mistake could result in the wrong configuration being deployed and a non-working software system. Deployments would be frequent because users were interacting with the software during this phase of development. An unexpected problem we encountered was that someone disconnected hardware from one computer and moved it to another. Why they performed this hardware swap is not known to me. But it resulted in two machines with sensors that were no longer responsive.

After a little digging I found a much better solution.

For some reason the Phidgets code examples that I encounter don’t mention this, but it is possible to get notification of a Phidgets device being connected to or disconnected from the computer, along with the serial number of the device in question. I used a .Net environment for my development, but this concept is applicable in other languages too. I’ll be using .Net in my code examples.

In the .Net environment, Phidgets offers a class named Manager in their SDK. The Manager class, once instantiated, raises Attach and Detach events each time an item of hardware is connected or disconnected from the system. If the class is instantiated after hardware has already been connected, it will raise Attach events for each item of hardware that is already there. The event argument object contains a member of the class Phidget that, among other data, contains the serial number of the item found and what type of hardware it is. For the application I was asked to update, I was only interested in the RFID reader hardware. I had the code ignore Phidgets of any other type. While it is unlikely the client would randomly connect such hardware to the machines running the solution, for the sake of code reuse it is better to have such protection in the application.

Let’s make an application that will list the RFID readers and the tag currently detected by each reader. I’m writing this in WPF. Not shown here is some of the typical code you might find in such a project, such as a ViewModel base class. In my sample project, I wrapped the Phidgets Manager class in another class that also keeps a list of the Phidget object instances that it has found. An abbreviated version of the class follows. The Phidgets Manager class will start raising Attach and Detach events once its Open() method has been called. If there is already hardware attached, expect events to be raised immediately.

    public partial class PhidgetsManager:IDisposable
    {
        Manager _manager;
        List<Phidget> _phidgetList = new List<Phidget>();
        public PhidgetsManager()
        {
            _manager = new Manager();
            _manager.Attach += _manager_Attach;
            _manager.Detach += _manager_Detach;
        }

        private void _manager_Detach(object sender, Phidget22.Events.ManagerDetachEventArgs e)
        {
            _phidgetList.Remove(e.Channel);
            OnPhidgetDetached(e.Channel);
        }

        private void _manager_Attach(object sender, Phidget22.Events.ManagerAttachEventArgs e)
        {
            _phidgetList.Add(e.Channel);
            OnPhidgetAttached(e.Channel);
        }
        public enum Action
        {
            Connected, 
            Disconnected
        }

        public class PhidgetsActionEventArgs
        {
            public Phidget Phidget { get; internal set;  }
            public Action Action { get; internal set; }
        }

        public delegate void PhidgetsActionEvent(object sender, PhidgetsActionEventArgs args);
        public event PhidgetsActionEvent DeviceAttached;
        public event PhidgetsActionEvent DeviceDetached;

        protected void OnPhidgetAttached(Phidget p)
        {
            if (DeviceAttached != null)
            {
                var arg = new PhidgetsActionEventArgs()
                {
                    Action = Action.Connected,
                    Phidget = p
                };
                DeviceAttached(this, arg);
            }
        }

        protected void OnPhidgetDetached(Phidget p)
        {
            if (DeviceDetached != null)
            {
                var arg = new PhidgetsActionEventArgs()
                {
                    Action = Action.Disconnected,
                    Phidget = p
                };
                DeviceDetached(this, arg);
            }
        }

    }

In my MainViewModel I only wanted to capture the RFID readers. It has its own list for maintaining these. When a device of the right type is found, I create a new RFID instance and assign its DeviceSerialNumber. When the instance’s Open() method is called, it will attach to the correct hardware since the serial number has been set. The RFID instance is added to my list. My list uses a wrapper class that exposes the device serial number and the current RFID tag that the device sees. This is exposed through a ViewModel object so that the UI will automatically update.
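
The MainViewModel isn’t shown in full in this post, so the following is my own sketch of the wiring just described. ReaderList is the collection the XAML binds to later; I’m also assuming the PhidgetsManager wrapper exposes an Open() that forwards to the underlying Manager’s Open(), a detail omitted from the abbreviated listing above.

using System.Collections.ObjectModel;
using System.Linq;
using System.Windows;
using Phidget22;

public class MainViewModel : ViewModelBase
{
    PhidgetsManager _phidgetsManager = new PhidgetsManager();

    public ObservableCollection<RFIDReaderViewModel> ReaderList { get; }
        = new ObservableCollection<RFIDReaderViewModel>();

    public MainViewModel()
    {
        _phidgetsManager.DeviceAttached += (sender, args) =>
        {
            // Only RFID readers are of interest; ignore any other Phidget type
            if (args.Phidget.ChannelClass != ChannelClass.RFID)
                return;
            Application.Current.Dispatcher.Invoke(() =>
                ReaderList.Add(new RFIDReaderViewModel(args.Phidget)));
        };
        _phidgetsManager.DeviceDetached += (sender, args) =>
        {
            Application.Current.Dispatcher.Invoke(() =>
            {
                var vm = ReaderList.FirstOrDefault(r =>
                    r.Reader.DeviceSerialNumber == args.Phidget.DeviceSerialNumber);
                if (vm != null) { vm.Dispose(); ReaderList.Remove(vm); }
            });
        };
        _phidgetsManager.Open(); // assumed wrapper method that calls Manager.Open()
    }
}

The per-reader wrapper class itself follows.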

    public class RFIDReaderViewModel: ViewModelBase, IDisposable
    {
        RFID _reader;

        public RFIDReaderViewModel(Phidget phidget)
        {
            if(phidget == null)
            {
                throw new ArgumentNullException("phidget");
            }
            if(phidget.ChannelClass != ChannelClass.RFID)
            {
                throw new ArgumentException($"Phidget must be an RFID Reader. The received item was a {phidget.ChannelClassName}");
            }
            this.Reader = new RFID();
            this.Reader.DeviceSerialNumber = phidget.DeviceSerialNumber;
            this.Reader.Tag += Reader_Tag;
            this.Reader.TagLost += Reader_TagLost;
            this.Reader.Open();
        }

        private void Reader_TagLost(object sender, Phidget22.Events.RFIDTagLostEventArgs e)
        {
            Dispatcher.CurrentDispatcher.Invoke(() =>
            {
                CurrentTag = String.Empty;
            });
        }

        private void Reader_Tag(object sender, Phidget22.Events.RFIDTagEventArgs e)
        {
            Dispatcher.CurrentDispatcher.Invoke(() =>
            {
                CurrentTag = e.Tag;
            });
        }

        public RFID Reader
        {
            get { return _reader;  }
            set { SetValueIfChanged(() => Reader, () => _reader, value); }
        }
        String _currentTag;
        public String CurrentTag
        {
            get { return _currentTag;  }
            set { SetValueIfChanged(() => CurrentTag, ()=>_currentTag, value);  }
        }

        public void Dispose()
        {
            try
            {
                this.Reader.Close();
            }
            catch (Exception exc)
            {

            }
        }
    }

All that’s left is the XAML. For now, I’m only interested in listing the serial numbers and the tag strings.

<UserControl x:Class="PhidgetDetectionDemo.Views.MainView"
             xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
             xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
             xmlns:local="clr-namespace:PhidgetDetectionDemo.Views"
             >
    <Grid>
        <ListView ItemsSource="{Binding ReaderList}">
            <ListView.ItemTemplate>
                <DataTemplate>
                    <Grid>
                        <Grid.ColumnDefinitions>
                            <ColumnDefinition Width="*" />
                            <ColumnDefinition Width="*" />
                        </Grid.ColumnDefinitions>
                        <TextBlock Text="{Binding Reader.DeviceSerialNumber}" />
                        <TextBlock Grid.Column="1" Text="{Binding CurrentTag}"/>
                    </Grid>
                </DataTemplate>
            </ListView.ItemTemplate>
        </ListView>
    </Grid>
</UserControl>

With that in place, now when I run the project, I get live updates of RFID tags coming into or out of range of each of the readers connected.

If you try to run this though, you may encounter a problem. The RFID readers (as the name implies) use radio frequencies for their operation. If they are close to each other, they may interact and interfere with each other. Don’t worry, this doesn’t mean that you can’t use them in close proximity. In my next entry I’ll show how to deal with RFID readers that are close to each other and mitigate interference through software.


Android Multi-Phone Debugging

I’m working on an application that uses Android’s WiFi P2P functionality so that two phones can communicate directly with each other. Because of the nature of this application, I need to have two instances of this program running at once. The only problem is that Android Studio only lets me have one debug target at a time. I thought of a few potential solutions.

  • Deploy to one phone, start the application, then start debugging on the second
  • Use a second computer and have each phone connected to a computer

Both of these solutions have shortcomings; they are both rather cumbersome. While this isn’t a supported scenario yet, there’s a better solution: we can use two instances of Android Studio on the same computer to open the project. We need a little bit of support from the operating system to pull this off. Android Studio will otherwise see that we are opening a project that is already open. Before doing this, we need to make a symbolic link to our project.

A symbolic link is an entry in the file system that has its own unique path but points to existing data on the file system. Using symbolic links, a single file can be accessed through multiple paths. To Android Studio, these look like two separate projects. But since both paths point to the same data on the file system, the two “instances” will always be in sync. There are some files that are going to be unique to one instance or the other, but we will cover that in a moment.

Symbolic links are supported on Windows, macOS, and Linux. To make a symbolic link on macOS or Linux, use the ln command.

ln -s /original/path /new/linked/path

On Windows, use the mklink command.

mklink /J c:\linked\path c:\original\path

Make sure that Android Studio is closed. Make a copy of your project’s folder. In the copy that you just made, you are going to erase most of the files. Delete the folder app/src in the copy. Open a terminal and navigate to the root of the copied project. In my case the original project is called P2PTest and the copy is called P2PCopy. To make the symbolic link for the src folder, I use the following command.

ln -s ../P2PTest/app/src app/src

Some other resources that I’ve looked at suggest doing the same thing for the project’s build.gradle and the build.gradle for each module. For simple projects, the only module is the app. I tried this, and while it worked fine for the project’s build.gradle, I would always get errors about a broken symbolic link when I tried it with the build.gradle at the module level. In the end, I only did this at the project level.

## ln -s ../P2PTest/app/build.gradle app/build.gradle ## this line had failed results
ln -s ../P2PTest/build.gradle build.gradle

Because I could not link the module’s build.gradle, if changes are made to it, it will need to be copied between the instances of the project (see the command after this paragraph). Thankfully, most changes to a project will be in the source files. While it is possible to edit the source files from either project, I encourage only editing from the primary project. This will help avoid situations where you have unsaved changes to the same file in different editors and have to manually merge them.
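
When the module’s build.gradle does change, a plain copy keeps the second project in sync. Run this from the root of the copy.

cp ../P2PTest/app/build.gradle app/build.gradle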

When you are ready to debug, you can set one instance of Android Studio to one of your phones, and the other instance to the other phone. Here, I have two instances set to deploy to a Galaxy Note 5 and a Galaxy Note 8.

Using My Phone as a Web Server – Introduction

I’m back from a recent trip out of the country. While the facility I was staying at did have an Internet connection, the price for about a week of access was a little over 100 USD. I’d rather go without a connection. While I didn’t have access to the Internet, I did have access to a local network. I considered options for bringing media with me. Rather than bring a few movies and songs here and there, I wanted to indiscriminately copy what I could to a drive. In addition to myself, there were three other people with me that might also want to view the media. It made sense to host the media on a pocket-sized web server. I set up a Raspberry Pi to do just this and took it with me on the trip.

After the trip was over, I thought to myself that there should be a way to do the same thing in a more compact package. I started to look at what the smallest Raspberry Pi Compute Module based setup would look like, and as I mentally constructed a solution, I realized it was converging on the same form factor as a phone. I’ve got plenty of old phones lying about. While I wouldn’t suggest this as a general solution (phones are a lot more expensive than a Pi), it is what I decided to have fun with.

Extra, unused Android devices.

There are various ways to run NodeJS on a phone, and there are apps in the app store that let you host a web server on your phone. I didn’t use any of these. I am reinventing the wheel simply because I find enjoyment in creating. It was a Sunday night, I was watching my TV lineup, and I decided to make a simple proof of concept. I only wanted the PoC to listen for incoming requests and send a hard-coded HTML page back to the client. I had that working in no time! I’ll build upon this to give it the ability to host static files and media files in a future update. First, I’m taking a moment to talk about how I built this.

I created a new Android project. Before writing code, I declared a few permissions. I like to do this first so that later on I don’t have to wonder why a specific call failed. The permissions I added are for Internet access, accessing the WiFi state, and the wake lock to keep the device from completely suspending. For what I show here, only the Internet permission is actually used. You can choose to omit the other two permissions for this version of the program.
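
For reference, those three declarations go inside the manifest element of AndroidManifest.xml.

<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.ACCESS_WIFI_STATE" />
<uses-permission android:name="android.permission.WAKE_LOCK" />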

With the permissions in place, I started writing the code. There are only three classes used in the web server (counting an interface as a class).

  • WebServer – Listens for incoming requests and passes them off to be handled as they come in
  • ClientSocketHandler – Processes an incoming connection and sends the response
  • IStatusUpdateReceiver – Used for passing status information back to the UI

The WebServer class accepts in its constructor the port on which it should run and a Context object, which is needed for some other calls. A WebServer instance does not begin to listen for connections until the start() method is called. Once it is started, the device retrieves the address of the WiFi adapter and creates a socket that is bound to this address. A status message is also sent to the UI so that it can show the device’s IP address. The class then creates the thread that will listen for incoming connections.

In the function listenerThread(), the class waits for an incoming connection. As soon as it receives one, it creates a new ClientSocketHandler with the socket and lets the ClientSocketHandler process the request and immediately goes back to listening for another connection. It doesn’t wait for the ClientSocketHandler to finish before waiting for the next connection.

package net.j2i.webserver.Service
import android.content.Context
import android.net.wifi.WifiManager
import android.text.format.Formatter
import java.net.InetSocketAddress
import java.net.ServerSocket
import java.net.Socket
import kotlin.concurrent.thread
class WebServer {
    companion object {
    }
    val port:Int;
    lateinit var receiveThread:Thread
    lateinit var  listenerSocket:ServerSocket;
    var keepRunning = true;
    val context:Context;
    var statusReceiver:IStatusUpdateReceiver
    constructor(port:Int, context: Context) {
        this.port = port;
        this.context = context;
        this.statusReceiver = object : IStatusUpdateReceiver {
            override fun updateStatus(ipAddress: String, clientCount: Int) {
                // Default receiver does nothing; the activity supplies a real one
            }
        }
    }
    fun start() {
        keepRunning = true;
        val wifiManager:WifiManager =
            this.context.getSystemService(Context.WIFI_SERVICE) as WifiManager;
        val wifiIpAddress:String = Formatter.formatIpAddress(wifiManager.connectionInfo.ipAddress);
        this.statusReceiver.updateStatus(wifiIpAddress, 0)
        this.listenerSocket = ServerSocket();
        this.listenerSocket.reuseAddress = true;
        this.listenerSocket.bind(InetSocketAddress(wifiIpAddress, this.port))
        this.receiveThread = thread(start = true) {
                this.listenerThread()
        }
        //this.receiveThread.start()
    }
    fun listenerThread() {
        while(keepRunning) {
            var clientSocket: Socket = this.listenerSocket.accept()
            val clientSocketHandler = ClientSocketHandler(clientSocket)
            clientSocketHandler.respondAsync()
        }
    }
}

In ClientSocketHandler, the class grabs the input stream (to read the request from the remote client) and the output stream (to send data back to the client). I haven’t implemented the full HTTP protocol. But in HTTP, the client sends one or more lines that make up the request, followed by a blank line. For now, my client handler reads from the input stream until that blank line is encountered. Once received, it composes a response.

I’ve got the HTML string that the handler is going to return hardcoded into the application. The response string is converted to a byte array. The size of this array is needed for one of the response headers: the client will receive the size of the response in the header Content-Length. The header for the response is constructed as a string and converted to a byte array. Then the two arrays are sent back to the client (first the header, then the content). After the response is sent, the handler has done its work.

package net.j2i.webserver.Service
import android.util.Log
import java.lang.StringBuilder
import java.net.Socket
import kotlin.concurrent.thread
class ClientSocketHandler {
    companion object {
        val TAG = "ClientSocketHandler"
    }
    private val clientSocket: Socket;
    private val responseThread:Thread
    constructor(sourceClientSocket:Socket) {
        this.clientSocket = sourceClientSocket;
        this.responseThread = thread( start = false) {
                this.respond()
        }
    }
    public fun respondAsync() {
        // start() runs respond() on the response thread; run() would execute it on the caller's thread
        this.responseThread.start()
    }
    private fun respond() {
        val inputStream = this.clientSocket.getInputStream()
        val outputStream = this.clientSocket.getOutputStream()
        // Reuse a single reader; creating a new BufferedReader per line can lose buffered data
        val reader = inputStream.bufferedReader()
        var requestReceived = false
        while (!requestReceived) {
            val requestLine = reader.readLine() ?: break
            Log.i(ClientSocketHandler.TAG, requestLine)
            if (processRequestLine(requestLine)) {
                requestReceived = true
            }
        }
        val sb:StringBuilder = StringBuilder()
        val sbHeader = StringBuilder()
        sb.appendLine(
            "<html>"+
                    "<head><title>Test</title></head>" +
                    "<body>Test Response;lkj;ljkojiojioijoij</body>"+
                   "</html>")
        sb.appendLine()
        val responseString = sb.toString()
        val responseBytes = responseString.toByteArray(Charsets.UTF_8)
        val responseSize = responseBytes.size
        sbHeader.appendLine("HTTP/1.1 200 OK");
        sbHeader.appendLine("Content-Type: text/html");
        sbHeader.append("Content-Length: ")
        sbHeader.appendLine(responseSize)
        sbHeader.appendLine()
        val responseHeaderString = sbHeader.toString()
        val responseHeaderBytes = responseHeaderString.toByteArray(Charsets.UTF_8)
        outputStream.write(responseHeaderBytes)
        outputStream.write(responseBytes)
        outputStream.flush()
        outputStream.close()
    }
    fun processRequestLine(requestLine:String): Boolean {
        if(requestLine == "") {
            return true;
        }
        return false;
    }
}

The interface that I mentioned, IStatusUpdateReceiver, is currently only being used to communicate the IP address on which the server is listening back to the UI.

package net.j2i.webserver.Service
interface IStatusUpdateReceiver {
    fun updateStatus(ipAddress:String, clientCount:Int);
}

Since the server runs on a different thread, before updating the UI I must make sure that UI-related calls are performed on the main thread. If you look in the class for MainActivity you will see that I created the WebServer instance in the activity. I’m only doing this because it is a PoC. If you make your own application, implement this as a service. I set the statusReceiver member of the WebServer to an anonymous class instance that does nothing more than update the IP address displayed in the UI. The call to set the text in the UI is wrapped in a runOnUiThread block. After this is set up, I call start() on the web server to get things going.

package net.j2i.webserver
import androidx.appcompat.app.AppCompatActivity
import android.os.Bundle
import android.widget.TextView
import net.j2i.webserver.Service.IStatusUpdateReceiver
import net.j2i.webserver.Service.WebServer
class MainActivity : AppCompatActivity() {
    lateinit var webServer:WebServer
    lateinit var txtIpAddress:TextView
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)
        this.txtIpAddress = findViewById(R.id.txtIpAddress)
        this.webServer = WebServer(8888, this)
        this.webServer.statusReceiver = object:IStatusUpdateReceiver {
            override fun updateStatus(ipAddress:String, clientCount:Int) {
                runOnUiThread {
                    txtIpAddress.text = ipAddress
                }
            }
        }
        this.webServer.start()
    }
}
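
With the app running and the IP address displayed on screen, the hard-coded page can be fetched from any machine on the same network. The address below is just an example; use whatever your phone reports.

curl http://192.168.0.12:8888/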

I was happy that my proof of concept worked. I haven’t yet decided if I am going to throw this away or continue working from this. In either case, there are a few things that I want to have in whatever my next version is. I do absolutely no exception handling or cleanup in this code. It needs to be able to time out a connection and refuse connections if it gets inundated. I also want my next version to do actual processing of the incoming HTTP request and serve up content that has been saved to the device’s memory, such as a folder on the device’s memory card. While I am making this to serve up static content, I might add a few services to the server/device side, such as a socket server. That will require a lot more thought.


The James Webb Telescope Has Launched!

On 25 December 2021, the James Webb Telescope (JWT) launched. The last month leading up to the launch had a couple of delays, due to weather and an incident after which they had to ensure there was no damage. At the time that I am writing this, the JWT has not yet been brought up to full operation. But thus far, things have been going well. The JWT is often thought of as the successor to the Hubble telescope. Some call it a replacement, but its capabilities are not identical to Hubble’s. It was designed based on some of the findings of Hubble. I’ve got some readers whose living memory does not go back as far as the Hubble telescope. Let’s take a brief walk through history.

Rendering of the James Webb Telescope

Edwin Hubble (the person, not the telescope) is most well-known for his astronomical observations and discoveries. Among them: he showed that there were galaxies beyond the Milky Way, found methods to gauge cosmic distances, and observed that the farther away from earth a galaxy is, the faster it is moving away from us (this is known as “Hubble’s Law”). Edwin Hubble performed many of his observations using what was then the world’s largest telescope, named after James D. Hooker. Naming large telescopes after people was a bit of a tradition.

The Hubble Telescope

Space telescopes were proposed in the early 1920s. As is the case with many high-investment scientific endeavors, Hubble’s planning was a joint venture that crossed international borders. The USA’s NASA and the European Space Agency both made contributions to Hubble. The project was started in the 1970s with plans to launch in 1983. There were delays that prevented this, but it finally launched in 1990. Much to the disappointment of many, after launch it was discovered that Hubble’s main mirror had been incorrectly manufactured; the telescope was taking distorted images. It was possible to use software to make some corrections to the images, but servicing was needed to correct the problem. Hubble, being positioned in low earth orbit, was accessible to astronauts by way of the space shuttle. A few years after launch, in 1993, a servicing mission corrected the optical problems. Through several other missions Hubble was maintained and upgraded until 2009. The telescope has been used for over 30 years and is still partially operational now. Some of the gyroscopes have failed, as has one of the high-resolution cameras, but some other cameras and instruments are still operational. A near-infrared instrument is functional but remains offline for the time being. The telescope is expected to maintain functionality until 2040.

The planet Uranus as seen in near infrared

While Hubble was operating in its earlier years, plans for its successor had begun. Planning for the James Webb Telescope began around 1996. The year prior, in 1995, the Hubble Deep Field photograph was taken. The Hubble telescope was aimed at a dark patch of sky and took a long exposure photograph. For 10 days the telescope collected whatever bits of light it could. The result was an image that was full of galaxies! Around 10,000 galaxies were observed through the deep field imaging. Visible, infrared, and ultraviolet wavelengths were used in the imaging.

Hubble Deep Field Image

Earlier I mentioned Edwin Hubble’s discovery that galaxies farther away are receding from earth at a faster rate than ones that are closer. The faster a galaxy is moving away, the more red-shifted the light from it is. Red shifting is a form of the doppler effect observed on light. Just as a sound is higher in pitch if its source is moving toward an observer and lower in pitch when it is moving away, visible light shifts toward red if the source is moving away from an observer and toward blue if it is moving closer. Part of the purpose of the JWT is to make observations of astronomical bodies much more distant than Hubble could. Since these bodies are more red shifted, the JWT was designed to be sensitive to red-shifted light. While both the Hubble Telescope and JWT have infrared capabilities, the JWT is designed to see light that is much more red. Because of this goal, the JWT has some rather unusual design elements and constraints.

Objects radiate their heat as electromagnetic waves. For objects that are hot enough, we are able to see this radiation as light; a hot piece of metal may glow red or orange. Objects with no glow in visible light may still give off light in the infrared spectrum. Such objects include the earth and the moon, which reflect infrared from the sun and emit heat of their own.

Infrared Photo showing heat leakage from a house

Hubble was positioned in low earth orbit, about 570 km above earth. The moon is about 385,000 km from earth. To avoid the glow of the earth and moon, the JWT is much farther away, at 1,500,000 km. Hubble was in orbit around the earth, but the JWT isn’t really in orbit; it sits at a Lagrange point (the second sun-earth Lagrange point, L2). Objects positioned in a Lagrange point tend to stay in position with very few active adjustments needed.

Relative distances from earth. Image from NASA.gov.

The telescope is still exposed to the sun, which would potentially heat the telescope up and cause the telescope to have its own glow that would interfere with imaging. To prevent the sun from being a problem, the telescope has a multilayered shield on the portion that is facing the sun. The shield is designed to reflect light away and to dissipate heat before it reaches the imaging elements of the telescope. Another unique element of the telescope is the exposed reflector. The reflector is composed of several hexagon-shaped mirrors coated in gold. Gold reflects infrared light very well. Using hexagon segments for the mirror simplifies manufacturing and allows the elements to be more easily folded; the telescope was launched in a fairing with the mirror folded and the sunshield sandwiched over the mirror.

Folded James Webb Telescope.

The JWT collects about 15 times more light than Hubble and has a much wider field of view. The telescope’s look stands out in that there is no tube wrapped around the optical elements. Optical tubes on terrestrial telescopes protect the elements from debris and stray light. Because of the telescope’s sun shield and its position, it won’t be exposed to stray light from the sun. I’ve not been able to find references on any concern for the mirror being exposed to debris in space (despite being a hard vacuum, it isn’t without debris), but unlike on earth, there is no concern with it collecting dust. With these differences in design and capabilities, what are the plans for how this telescope will be used?

Comparison of Hubble and JWT mirror size, from NASA.gov

While I’m not a fan of this description, I often see its purpose summarized as “looking back in time.” Despite my dislike of this description, it isn’t inaccurate. Light takes time to travel. If you look toward the moon, the light reflected from the moon took about 1.3 seconds to travel to your eyes. You are seeing how the moon looked over a second ago. For the sun, it’s eight minutes ago. These bodies don’t change dramatically enough for the delay to make a significant difference. But as we look at bodies that are farther away, the travel time becomes more significant. From Mars to earth, light takes up to about 22 minutes. From Jupiter to earth, it is about 48 minutes. It takes a few hours for light to travel from Pluto to earth. From other galaxies, light takes years. While the light-year is a unit of distance, it also tells you how long light takes to travel from a body. The JWT’s light collection capabilities make it capable of seeing light from far enough away to collect information on the early universe. The Hubble telescope was able to collect information on the universe from about 13.4 billion years ago, while the James Webb Telescope is expected to collect data from about 13.7 billion years ago. That 300,000,000-year difference reaches back to the era when the first stars and galaxies were forming.

As of yet, the James Webb Telescope hasn’t taken its first image. I’m writing this about 4 days after launch. It has deployed the sun shield. It will take about another 25 days for the telescope to reach its intended position. Before then, the mirror segments must be unfolded into place. If you are waiting to see images from the JWT, it will be a while; there’s calibration and preparation needed. Other than test images, we might not start seeing full images for another six months.

If you want to keep track of where the telescope is and its status, NASA has a site available showing the tracking data.

James Webb Telescope Tracking Site

Developments on the James Webb Telescope will be slow to come at first, but it should be interesting.


Alternate File Streams::Security Concerns?

I previously wrote about alternate data streams. Consider this an addendum to that post.

Alternate file streams are a Windows file system feature that allows additional sets of data to be attached to a file. Each data stream can be independently edited, but they are all part of the same file. Since the Windows UI doesn’t show information on these streams, the feature raises a few perceivable security concerns. Whether it is the intention or not, information within an alternate file stream is concealed from all except those that know to look for it. This applies not only to humans looking at the file system, but also to security products that may scan a file system.

It is possible to put executable content within an alternate file stream. Such executable content can’t be invoked directly from the UI, but it can be invoked through tools such as WMI. Given these security concerns that alternate streams may raise, why do I still use them? Those concerns are only applicable to how other untrusted entities may use the feature. And any action of an untrusted entity is a potential concern, whatever feature it uses.

I thought this concern was worth mentioning because if you try searching for more information on alternate file streams, these concerns are likely to come up on the first page of results.

Working With Alternative Data Streams::The “Hidden” Part of Your Windows File System

In the interest of keeping a cleaner file system, I sometimes try to minimize the number of files that I need to keep data organized. A common scenario where this goal comes up is when writing an application that must sync with some other data source, such as a content management system. Having a copy of the files from another system isn’t always sufficient. Sometimes additional data is needed for keeping the computers in a solution in sync. For a given file, I may need to also track an etag, CRC, or information on the purpose of the file. There are some common solutions for organizing this data. One is to have one additional file that contains all of the metadata for the files being synced. Another is to make an additional data file for each content file that contains this information. The solution that I prefer isn’t quite either of these. I prefer to have the data within an “alternate stream” of the same file. If the file gets moved elsewhere on the file system, the additional data moves with it.

This is very much a Windows-only solution. It will only work on the NTFS file system. If you attempt to access an alternative stream on a FAT32 file system, it will fail, since that file system does not support them.

I most recently used this system of organization when I inherited a project that was, in my opinion, built with the wrong technology. In all fairness, many features of the application in question were implemented through scope creep. I reimplemented the application in about a couple of days using .Net technologies (this was much easier for me to do since, unlike the original developers, I had the benefit of complete requirements). There are a lot of aspects of that project that will be expressed in posts in the coming weeks.

The reason that I call this feature “hidden” is because the Windows UI does not give any visual indicator that a file has an additional data stream. There is no special icon. If you check on the size of a file, it will only show you the size of the main data stream. (In theory, one could add 3 gigs of data to the secondary stream of a 5 byte file and the OS would only report the file as being 5 bytes in size).

I’ll demonstrate accessing this stream from within the .Net Framework. There’s no built-in support for alternative data streams, but there is built-in support for Windows file handles. Using P/Invoke, you can get a Windows file handle and then pass it to a .Net FileStream object to be used with all the other .Net features.
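
Before moving to the command line, here is a minimal sketch of that P/Invoke approach. The file and stream names are just examples, and I’m keeping error handling to a minimum.

using System;
using System.IO;
using System.Runtime.InteropServices;
using Microsoft.Win32.SafeHandles;

class AlternateStreamReader
{
    const uint GENERIC_READ = 0x80000000;
    const uint FILE_SHARE_READ = 0x00000001;
    const uint OPEN_EXISTING = 3;

    [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    static extern SafeFileHandle CreateFile(
        string fileName, uint desiredAccess, uint shareMode, IntPtr securityAttributes,
        uint creationDisposition, uint flagsAndAttributes, IntPtr templateFile);

    static void Main()
    {
        // The .Net Framework FileStream(string) constructor rejects the ':' stream
        // syntax in a path, so the handle comes from the Win32 CreateFile instead
        SafeFileHandle handle = CreateFile(@"readme.txt:stream", GENERIC_READ,
            FILE_SHARE_READ, IntPtr.Zero, OPEN_EXISTING, 0, IntPtr.Zero);
        if (handle.IsInvalid)
            throw new IOException("Could not open the stream", Marshal.GetLastWin32Error());

        // From here, the alternate stream works like any other FileStream
        using (var stream = new FileStream(handle, FileAccess.Read))
        using (var reader = new StreamReader(stream))
        {
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}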

Every file has a default data stream. This is the stream you would see as being the normal file. It contains the file data that you are usually working with. With our normal concept of files and directories, files contain data and directories contain files, but no data directly. A file can contain any number of alternative data streams. Each one of these streams has a name of your choosing. Directories can have alternative data streams too!

To start experimenting with streams, you only need the command prompt. Open the command prompt, navigate to a directory in which you will place your experimental streams, and type the following.

echo This is data for my alternative stream > readme.txt:stream

If you get a directory listing, we find that the file is listed as zero bytes in size.

c:\temp\streams>echo This is data for my alternative stream > readme.txt:stream

c:\temp\streams>dir
 Volume in drive C has no label.
 Volume Serial Number is 46FF-0556

 Directory of c:\temp\streams

10/11/2021  11:22 AM    <DIR>          .
10/11/2021  11:22 AM    <DIR>          ..
10/11/2021  11:22 AM                 0 readme.txt
               1 File(s)              0 bytes
               2 Dir(s)  108,162,564,096 bytes free

c:\temp\streams>

Is the data really there? From the command line, we can view the data using the more command (the type command doesn’t accept the syntax needed to refer to the stream).

c:\temp\streams>more < readme.txt:stream
This is data for my alternative stream

Windows uses alternative data streams for various system purposes. There are a number of names that you may encounter in files that Windows manages. This is a list of some well known stream names.

  • $DATA – This is the default stream. This stream contains the main (regular) data for a file. If you open README.TXT, this has the same effect as opening README.TXT:$DATA.
  • $BITMAP – data used for managing a b-tree for a directory. This is present on every directory.
  • $ATTRIBUTE_LIST – A list of attributes for a file.
  • $FILE_NAME – Name of the file in Unicode characters, including short name and hard links
  • $INDEX_ALLOCATION – used for managing large directories

There are some other names. In general, with the exception of $DATA, I would suggest not altering these streams.

Windows does give you the ability to list alternative streams through PowerShell. We will look at that in a moment. For now, let’s say you had to make your own tool for managing such resources. The utility of this example is that it gives us an opportunity to see how we might work with these resources in code. One of the first tools I think would be useful is a command line tool that transfers data from one stream to another. With this tool, I can read from a stream and write it either to the console or to another stream. The only thing that affects where it is written is the file name. It only took a few minutes to write such a tool using C++. It is small enough to put the entirety of the code here.

#include <Windows.h>
#include <iostream>
#include <string>
#include <vector>
#include <list>

using namespace std;

const int BUFFER_SIZE = 1048576;

int main(int argc, CHAR ** argv)
{
    wstring InputStreamName = L"";
    wstring OutputStreamName = L"con:";
    wstring InputPrefix = L"--i=";
    wstring OutputPrefix = L"--o=";

    wstring Instructions =
        L"To use this tool, provide an input and an output file for it. The syntax looks like the following.\r\n\r\n"
        L"StreamStreamer --i=inputFileName --o=OutputFileName.ext::streamName\r\n\r\n";

    HANDLE hInputFile = INVALID_HANDLE_VALUE;
    HANDLE hOutputFile = INVALID_HANDLE_VALUE;

    vector<wstring> arguments(argc);

    for (auto i = 0; i < argc; ++i)
    {
        auto arg = string(argv[i]);
        arguments[i] = (wstring(arg.begin(), arg.end()));
    }

    for (int i = 0; i < argc; ++i)
    {
        if (!arguments[i].compare(0, InputPrefix.size(), InputPrefix))
            InputStreamName = arguments[i].substr(InputPrefix.size());
        if (!arguments[i].compare(0, OutputPrefix.size(), OutputPrefix))
            OutputStreamName = arguments[i].substr(OutputPrefix.size());
    }

    if ((!InputStreamName.size()) || (!OutputStreamName.size()))
    {
        wcout << Instructions;
        return 0;
    }

    // Explicitly call the wide-character CreateFileW since the paths are wstrings
    hInputFile = CreateFileW(InputStreamName.c_str(), GENERIC_READ, FILE_SHARE_READ, 0, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, 0);
    if (hInputFile != INVALID_HANDLE_VALUE)
    {
        hOutputFile = CreateFileW(OutputStreamName.c_str(), GENERIC_WRITE, FILE_SHARE_READ, 0, CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, 0);
        if (hOutputFile != INVALID_HANDLE_VALUE)
        {
            vector<char> buffer = vector<char>(BUFFER_SIZE);
            DWORD bytes_read = 0;
            DWORD bytes_written = 0;
            do {
                bytes_read = 0;
                if (ReadFile(hInputFile, &buffer[0], BUFFER_SIZE, &bytes_read, 0))
                    WriteFile(hOutputFile, &buffer[0], bytes_read, &bytes_written, 0);
            } while (bytes_read > 0);
            CloseHandle(hOutputFile);
        }
        CloseHandle(hInputFile);
    }
}


Usage is simple. The tool takes two arguments: an input stream name and an output stream name, prefixed with --i= and --o= respectively. If no output name is specified, it defaults to con:. This name, con:, refers to the console. It had been a reserved file name for the console; I have the vague idea there may be some other console name, but could not find it. con: is a carry-over from the DOS days of 30+ years ago. It worked then, and it still works now, so I’m sticking with it.

After compiling this, I can use it to retrieve the text that I attached to the stream earlier.

c:\temp\streams>StreamStreamer.exe --i=readme.txt:Stream
This is data for my alternative stream

c:\temp\streams>

I can also use it to take the contents of some other arbitrary file and attach it to an existing file in an alternative stream. In testing, I took a JPG I had of the moon and attached it to a file. Then I extracted it from that alternative stream and wrote it to a different regular file just to ensure that I had an unaltered data stream.

c:\temp\streams>StreamStreamer.exe --i=Moon.JPG --o=readme.txt:moon

c:\temp\streams>StreamStreamer.exe --i=readme.txt:moon --o=m.jpg

c:\temp\streams>dir *.jpg
 Volume in drive C has no label.
 Volume Serial Number is 46FF-0556

 Directory of c:\temp\streams


10/12/2021  02:54 PM         3,907,101 m.jpg
12/21/2020  07:46 PM         3,907,101 Moon.JPG
               2 File(s)      7,814,202 bytes
               0 Dir(s)  105,063,383,040 bytes free

c:\temp\streams>

You will probably want the ability to see what streams are inside of a file. You could download the Streams tool from Sysinternals, or you could use PowerShell. PowerShell has built-in support for streams; I’ll be using it throughout the rest of this writeup. To view streams with PowerShell, use the Get-Item command with the -Stream * parameter.

PS C:\temp\streams> Get-Item .\readme.txt -stream *


PSPath        : Microsoft.PowerShell.Core\FileSystem::C:\temp\streams\readme.txt::$DATA
PSParentPath  : Microsoft.PowerShell.Core\FileSystem::C:\temp\streams
PSChildName   : readme.txt::$DATA
PSDrive       : C
PSProvider    : Microsoft.PowerShell.Core\FileSystem
PSIsContainer : False
FileName      : C:\temp\streams\readme.txt
Stream        : :$DATA
Length        : 15

PSPath        : Microsoft.PowerShell.Core\FileSystem::C:\temp\streams\readme.txt:moon.jpg
PSParentPath  : Microsoft.PowerShell.Core\FileSystem::C:\temp\streams
PSChildName   : readme.txt:moon.jpg
PSDrive       : C
PSProvider    : Microsoft.PowerShell.Core\FileSystem
PSIsContainer : False
FileName      : C:\temp\streams\readme.txt
Stream        : moon.jpg
Length        : 3907101

PSPath        : Microsoft.PowerShell.Core\FileSystem::C:\temp\streams\readme.txt:stream
PSParentPath  : Microsoft.PowerShell.Core\FileSystem::C:\temp\streams
PSChildName   : readme.txt:stream
PSDrive       : C
PSProvider    : Microsoft.PowerShell.Core\FileSystem
PSIsContainer : False
FileName      : C:\temp\streams\readme.txt
Stream        : stream
Length        : 43



PS C:\temp\streams>

If you are making an application that uses alternative streams, you will want to know how to list the streams from within it also. That is also easy to do. Since the much-beloved Windows Vista, we’ve had Win32 APIs for enumerating streams. The functions FindFirstStreamW/FindFirstStreamTransactedW and FindNextStreamW will do this for you. Take note that only Unicode versions of these functions exist; there are no ANSI variants. If you have ever used FindFirstFile and FindNextFile, the usage is similar.

Two variables are needed to search for streams. One variable is a HANDLE that is used as an identifier for the resources and state of the search request. The other is a WIN32_FIND_STREAM_DATA structure into which data on streams that were found are put. FindFirstStreamW will return a handle and populate a WIN32_FIND_STREAM_DATA with the first stream it finds. From there, each time FindNextStreamW is called with the HANDLE that had been returned earlier, it will populate a WIN32_FIND_STREAM_DATA with the information on the next stream. When no more streams are found, FindNextStreamW will have a return value of ERROR_HANDLE_EOF.

#include <Windows.h>
#include <iostream>
#include <vector>
#include <string>

using namespace std;

int main(int argc, char**argv)
{
    WIN32_FIND_STREAM_DATA fsd;
    HANDLE hFind = NULL;
    vector<wstring> arguments(argc);

    for (auto i = 0; i < argc; ++i)
    {
        auto arg = string(argv[i]);
        arguments[i] = (wstring(arg.begin(), arg.end()));
    }

    if (arguments.size() < 2)
        return 0;
    wstring fileName = arguments[1];

    try {
        hFind = FindFirstStreamW(fileName.c_str(), FindStreamInfoStandard, &fsd, 0);
        if (hFind == INVALID_HANDLE_VALUE) throw ::GetLastError();
        const int BUFFER_SIZE = 8192;
        WCHAR buffer[BUFFER_SIZE] = { 0 };
        WCHAR fileNameBuffer[BUFFER_SIZE] = { 0 };

        wcout << L"The following streams were found in the file " << fileName << endl;
        for (;;)
        {
            swprintf(fileNameBuffer, BUFFER_SIZE, L"%s%s", fileName.c_str(), fsd.cStreamName);
            swprintf_s(buffer, BUFFER_SIZE, L"%-50s %lld", fileNameBuffer, fsd.StreamSize.QuadPart);
            wstring formattedDescription = wstring(buffer);
            wcout << formattedDescription << endl;

            if (!::FindNextStreamW(hFind, &fsd))
            {
                DWORD dr = ::GetLastError();
                if (dr != ERROR_HANDLE_EOF) throw dr;
                break;
            }
        }
    }
    catch (DWORD err)
    {
        wcout << "Oops, Error happened. Windows error number " << err;
    }
    // FindFirstStreamW returns INVALID_HANDLE_VALUE on failure, so guard both cases.
    if (hFind != NULL && hFind != INVALID_HANDLE_VALUE)
        FindClose(hFind);
}

For my actual application purposes, I don’t need to query the streams in a file. The streams of interest to me will have a predetermined name. Instead of querying for them, I attempt to open the stream. If it isn’t there, I will get a return error code indicating that the file isn’t there. Otherwise I will have a file HANDLE for reading and writing. With what I’ve written so far, you could begin using this feature in C/C++ immediately. But my target is the .Net Framework. How do we use this information there?

In Win32, you can read or write these alternative data streams as you would any other file by using the correct stream name. If you try that within the .Net Framework, it won’t work. Before even hitting the Win32 APIs, the .Net Framework will treat the stream name as an invalid file name. To work around this, you’ll need to P/Invoke the Win32 API for opening files. Thankfully, once you have a file handle, the .Net Framework will work with that file handle just fine and allow you to use all the methods that you would with any other stream.

Before adding the P/Invoke that is needed to use this functionality in .Net, let’s define a few numerical constants.

    public partial class NativeConstants
    {
        public const uint GENERIC_WRITE = 0x40000000;
        public const uint GENERIC_READ = 0x80000000;
        public const int FILE_SHARE_DELETE = 4;
        public const int FILE_SHARE_WRITE = 2;
        public const int FILE_SHARE_READ = 1;
        public const int OPEN_ALWAYS = 4;
    }

These may look familiar. These constants have the same names as the constants used in C when calling the Win32 API. As their names suggest, they indicate the mode in which files should be opened. Now for the P/Invoke declaration for the call that opens files.

    public partial class NativeMethods
    {
        [DllImportAttribute("kernel32.dll", EntryPoint = "CreateFileW")]
        public static extern System.IntPtr CreateFileW(
            [InAttribute()][MarshalAsAttribute(UnmanagedType.LPWStr)] string lpFileName,
            uint dwDesiredAccess,
            uint dwShareMode,
            [InAttribute()] System.IntPtr lpSecurityAttributes,
            uint dwCreationDisposition,
            uint dwFlagsAndAttributes,
            [InAttribute()] System.IntPtr hTemplateFile
        );

    }

That’s it! That is the only P/Invoke that is needed.
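Since CreateFileW hands back a raw IntPtr, it is worth validating the handle before giving it to managed code. The helper below is a hypothetical convenience wrapper of my own, not part of any library; it assumes the NativeConstants and NativeMethods classes above, plus using directives for System, System.IO, and Microsoft.Win32.SafeHandles.

    public static class AlternateStreams
    {
        // Opens "filePath:streamName" and wraps the raw handle in a FileStream.
        public static FileStream OpenAlternateStream(string filePath, string streamName, FileAccess access)
        {
            uint desiredAccess = (access == FileAccess.Read)
                ? NativeConstants.GENERIC_READ
                : NativeConstants.GENERIC_WRITE;
            IntPtr handle = NativeMethods.CreateFileW(
                $"{filePath}:{streamName}",
                desiredAccess,
                NativeConstants.FILE_SHARE_READ,
                IntPtr.Zero,
                NativeConstants.OPEN_ALWAYS,
                0,
                IntPtr.Zero);
            // CreateFileW returns INVALID_HANDLE_VALUE (-1) when the open fails.
            if (handle == new IntPtr(-1))
                throw new IOException($"Could not open stream {streamName} on {filePath}");
            return new FileStream(new SafeFileHandle(handle, true), access);
        }
    }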

The data that I was writing to these files was metadata for matching them up with entries in a CMS. This includes information like the last date the file was updated on the CMS, a CRC or ETAG for knowing whether the version on the local computer is the same as the one on the CMS, and a title for presenting to the user (which may differ from the file name itself). I’ve decided to simply name the stream in which I am placing this data meta. I’m using JSON for the data encoding. For your purposes, you could use any format that fits your application. Let’s open the stream.

I’ll use the Win32 CreateFileW function to get a file handle. That handle is passed to the .Net FileStream constructor. From there, there is no difference in how I would read or write this stream compared to any other. The following reads the metadata back from a file’s meta stream.

var filePath = Path.Combine(ds.DownloadCachePath, $"{fe.ID}{extension}");
FileInfo fi = new FileInfo(filePath);
var fullPath = fi.FullName;
if (fi.Exists)
{
    var metaStream = NativeMethods.CreateFileW(
        $"{fullPath}:meta",
        NativeConstants.GENERIC_READ,
        NativeConstants.FILE_SHARE_READ,
        IntPtr.Zero,
        NativeConstants.OPEN_ALWAYS,
        0,
        IntPtr.Zero);
    using (StreamReader sr = new StreamReader(new FileStream(metaStream, FileAccess.Read)))
    {
        try
        {
            var metaData = sr.ReadToEnd();
            if (!String.IsNullOrEmpty(metaData))
            {
                var data = JsonConvert.DeserializeObject<FileEntry>(metaData);
                fe.LastModified = data.LastModified;
            }
        } catch(IOException exc)
        {
            // If the meta stream is missing or unreadable, leave the entry's metadata unpopulated.
        }
    }
}
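The snippet above reads the metadata back. Writing it in the first place is the mirror image. The following is a minimal sketch under the same assumptions (the fe and fullPath variables from the snippet above, and Json.NET for the serialization). Note that OPEN_ALWAYS does not truncate an existing stream, so if a shorter payload might overwrite a longer one, the CREATE_ALWAYS disposition (numeric value 2) would be the one to use.

var metaWriteHandle = NativeMethods.CreateFileW(
    $"{fullPath}:meta",
    NativeConstants.GENERIC_WRITE,
    NativeConstants.FILE_SHARE_READ,
    IntPtr.Zero,
    NativeConstants.OPEN_ALWAYS, // CREATE_ALWAYS (2) would truncate any prior contents
    0,
    IntPtr.Zero);
using (StreamWriter sw = new StreamWriter(new FileStream(metaWriteHandle, FileAccess.Write)))
{
    // Persist the same FileEntry shape that the reading code deserializes.
    sw.Write(JsonConvert.SerializeObject(fe));
}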

I said earlier that this is a Windows-only solution and that it doesn’t work on the FAT32 file system. There are two implications to this. The first is that if you are using this in a .Net environment running on another operating system, it won’t work; it will likely fail since the P/Invokes won’t be able to bind. The other potential problem demands an active check within the code. If a program using alternative file streams is given a FAT32 file system to work with, it should detect that it is on the wrong type of file system before trying to perform actions that will fail. Detecting the file system type only requires a few lines of code. In .Net, the following method takes the path of the currently running assembly, sees what drive it is on, and retrieves the file system type.

static string GetAssemblyDriveFormat()
{
    String assemblyPath = typeof(FileSystemDetector).Assembly.Location;
    String driveLetter = assemblyPath[0].ToString();
    DriveInfo driveInfo = new DriveInfo(driveLetter);
    string fsType = driveInfo.DriveFormat;
    return fsType;
}

If this code is run from a drive using the NTFS file system, the return value will be the string NTFS. If it is anything else, know that attempts to access alternative streams will fail. If you try to copy these files to a FAT32 drive, Windows will warn you of a loss of data; only the default streams will be copied to the FAT32 drive.

In the next post on this topic, I will demonstrate a practical use. I’ll also talk about what some might see as a security concern with alternative file streams.

Using an iPhone as a Beacon

As with any project, I had a list of milestones and dates on which I expected to hit them leading up to project completion. One of the elements of the project was an application that needed to detect its proximity to other devices to select a device for interaction. I planned to use iBeacons for this and had a delivery date on some beacons for development. The delivery date came, a box with the matching tracking number came, but there were no iBeacons inside. Instead, there was a phone case. This isn’t the first time I have ordered one item and Amazon has sent me another. I went online and filled out a form to have the order corrected; they stated I would have the item in another 5 days. In the meantime, I didn’t want to let progress slip. I’ve heard several times, “You can use an iPhone as an iBeacon.” I now had motivation to look into this. You can, in fact, use a phone as an iBeacon, but you have to write an application yourself to use it this way.

When I took a quick look in the App Store, I couldn’t find an app for this purpose, so I decided to make an application myself. It isn’t hard. In my case, I’m emulating an iBeacon as a stand-in for actual hardware, but there are other reasons you might want to do this. For example, if I were using an iPad as a display showing more information on an exhibit, users browsing the exhibit could interact with the content on the display using their own phones. The iBeacon signal could be used so that a user’s phone knows which display it is close to, allowing them to trigger interactions from their own phone (a valuable method of interaction given the heightened concerns over hygiene and shared touch surfaces).

Beacons are uniquely identified by three pieces of data: the UUID, the Major number, and the Minor number. A UUID, or Universally Unique ID, is usually shared among a group of iBeacons that are associated with the same entity. The usage of the Major and Minor numbers is up to the entity. Usually the Major will be used to group related iBeacons together, with the Minor number used as a unique ID within the set. I’ll talk more about these numbers in another post.

For my iPhone application, I have created a few variables to hold the simulated Beacon’s identifiers. I also have a variable to track whether the iBeacon is active, and have defined a Zero UUID to represent a UUID that has not been assigned a value.

class BeaconManager {
    
    var objectWillChange = PassthroughSubject<Void, Never>()
    
    let ZeroUUID = UUID.init(uuidString: "00000000-0000-0000-0000-000000000000")
    
    var BeaconUUID = UUID(uuidString: "00000000-0000-0000-0000-000000000000") {
        didSet { updateUI() }
    }
    
    var Major:UInt16 = 100 {
        didSet { updateUI() }
    }
    
    var Minor:UInt16 = 2 {
        didSet { updateUI() }
    }
    
    var IsActive:Bool = false {
        didSet { updateUI() }
    }
}

I am going to use SwiftUI for displaying information. That is why setting these variables also triggers a call to updateUI(). There are some callbacks made by Apple’s iBeacon API. For these, I’ll also need to implement the CBPeripheralManagerDelegate protocol, which is defined in CoreBluetooth. We also need permission for the device to advertise its presence over Bluetooth; Bluetooth is often used for indoor location (which will be my ultimate intention). Let’s get all of these other pieces in place. The necessary import statements and inheritance will look like the following.

import Foundation
import CoreLocation
import CoreBluetooth
import Combine

class BeaconManager: NSObject, CBPeripheralManagerDelegate, Identifiable, ObservableObject {
   ...
}

For the Bluetooth permission that the application needs, a new String value must be added to the Info.plist. The item’s key is NSBluetoothAlwaysUsageDescription. The value should be a text description that will be presented to the user letting them know why the application is requesting Bluetooth permissions.
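In the Info.plist XML source, that entry looks something like the following. The description string here is only an example; use wording appropriate for your application.

<key>NSBluetoothAlwaysUsageDescription</key>
<string>This app uses Bluetooth to advertise itself as an iBeacon to nearby devices.</string>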

I want the simulated iBeacon to have the same identity every time the application runs. At runtime, the application checks whether there is a UUID already saved in the settings. If there is not one, it generates a new UUID and saves it to the settings. From then on, it will always use the same ID. I do the same thing with the Major and Minor numbers, generating them with the UInt16.random(in:) function. Together, this information is used to create a CLBeaconRegion.

    func createBeaconRegion() -> CLBeaconRegion {
        let settings = UserDefaults.standard
        // Reuse the identifiers from a previous run if they were saved.
        if let savedUUID = settings.string(forKey: BEACON_UUID_KEY),
           let tempBeaconUUID = UUID(uuidString: savedUUID) {
            BeaconUUID = tempBeaconUUID
        }
        if(BeaconUUID == nil){
            BeaconUUID = UUID()
            settings.set(BeaconUUID!.uuidString, forKey: BEACON_UUID_KEY)
            settings.synchronize()
        }
        // integer(forKey:) returns 0 when no value has been saved.
        let majorValue = settings.integer(forKey: BEACON_MAJOR_KEY)
        if(majorValue == 0) {
            Major = UInt16.random(in: 1...65535)
            settings.set(Int(Major), forKey: BEACON_MAJOR_KEY)
        } else {
            Major = UInt16(majorValue)
        }
        let minorValue = settings.integer(forKey: BEACON_MINOR_KEY)
        if(minorValue == 0) {
            Minor = UInt16.random(in: 1...65535)
            settings.set(Int(Minor), forKey: BEACON_MINOR_KEY)
        } else {
            Minor = UInt16(minorValue)
        }
        print(BeaconUUID!.uuidString)
        let major:CLBeaconMajorValue = Major
        let minor:CLBeaconMinorValue = Minor
        let beaconID = "net.domain.application"
        return CLBeaconRegion(proximityUUID: BeaconUUID!, major: major, minor: minor, identifier: beaconID)
    }

When I first tried to use the CLBeaconRegion it failed, and I was confused. After a bit more reading, I found out why. The Bluetooth radio can take a moment to initialize into the mode that the code needs. Trying to use it too soon can result in failure. To fix this, wait for a callback to the CBPeripheralManagerDelegate method peripheralManagerDidUpdateState(_ peripheral: CBPeripheralManager). In the handler for this callback, check whether the .state of the peripheral variable is .poweredOn. If it is, then we can start using our CLBeaconRegion. We can call startAdvertising on the CBPeripheralManager object to make the iBeacon visible. When we want the phone to no longer act as an iBeacon, we can call stopAdvertising. Note that the device will only continue to transmit while the application has focus. If the application gets pushed to the background, the phone will stop presenting as an iBeacon.

    func peripheralManagerDidUpdateState(_ peripheral: CBPeripheralManager) {
        if(peripheral.state == .poweredOn) {
            let  beaconRegion = createBeaconRegion()
            let peripheralData = beaconRegion.peripheralData(withMeasuredPower: nil)
            peripheral.startAdvertising((peripheralData as NSDictionary) as? [String : Any])
            IsActive = true
        }
    }

    func start() {
        if(!IsActive) {
            peripheral = CBPeripheralManager(delegate:self, queue:nil)
        }
    }
    
    func stop() {
        if(IsActive) {
            if (peripheral != nil){
                peripheral!.stopAdvertising()
            }
            IsActive = false
        }
    }

The code for the class I used for simulating the iBeacon follows. For the simplest use case, just instantiate the class and call the start() method. Provided the Info.plist has been populated with a value for NSBluetoothAlwaysUsageDescription and the user has granted permission, it should just work. In the next post, let’s look at how to detect iBeacons with an iOS application. The next application isn’t limited to detecting iPhones acting as iBeacons; it will work with real iBeacons too. As of now, I have gotten my hands on a physical iBeacon-compatible transmitter. While any iBeacon transmitter should work, if you would like to follow along with the same iBeacon that I am using, you can purchase the following from Amazon (affiliate link).

Posts may contain products with affiliate links. When you make purchases using these links, we receive a small commission at no extra cost to you. Thank you for your support.



High Resolution Video Capture on the Raspberry Pi

I’ve previously talked about video capture on the Raspberry Pi using an HDMI device that interfaced with the camera connector. Today I’m looking at using USB capture devices. USB capture devices often present as web cams. The usual software and techniques that you may use with a webcam should generally work here.

The two devices that I have are the Elgato Cam Link 4K and the Atomos Connect. Despite the name, the Cam Link 4K does not presently work in 4K mode for me on the Pi (I’m not sure that the Pi could handle that even if it did). I am using a Raspberry Pi 4. I got better results with the Atomos Connect; it is able to send the Pi pre-compressed video so that the Pi doesn’t have to compress it.

Hardware setup is simple. I connected the USB capture device to the Pi and connected an HDMI source to the capture device. If you want to be able to monitor the video while it is captured, you will also need an HDMI splitter; the Pi does not show the video while it is being captured. Most of what needs to be done happens in the Raspberry Pi terminal.

If you want to ensure that your capture device was detected, you can use the lsusb command. This command lists all the hardware detected on the USB ports. If you can’t recognize your device in the output, running lsusb, disconnecting the device, running it again, and noting the difference will let you match a line of the output to an item of hardware. Trying first with the Elgato Cam Link, my device was easily identified; there was an item labeled Elgato Systems GmbH.

I’ve not been able to make the devices work with raspistill or raspivid, but they work with ffmpeg and Video4Linux (v4l-utils). To install the Video4Linux utilities, use the following command.

sudo apt install v4l-utils

Once Video 4 Linux is installed, you can list the devices that it detects and the device file names.

v4l2-ctl --list-devices

In addition to hardware encoders, I get the following in my output.

Cam Link 4K: Cam Link 4K (usb-0000:01:00.0-2):
          /dev/video0
          /dev/video1

The device name of interest is the first, /dev/video0. Using that file name, we can check which resolutions it supports with ffmpeg.

$ ffmpeg -f v4l2 -list_formats all -i /dev/video0

[video4linux2,v4l2 @ 0xcca1c0] Raw     :    yuyv422 :       YUYV 4:2:2 : 1920x1080
[video4linux2,v4l2 @ 0xcca1c0] Raw     :       nv12 :     Y/CbCr 4:2:0 : 1920x1080
[video4linux2,v4l2 @ 0xcca1c0] Raw     :    yuv420p : Planar YUV 4:2:0 : 1920x1080

For this device, the only resolution supported is 1920×1080. I have three options for the pixel format: yuyv422, nv12, and yuv420p. Before we start recording, let’s view the input. With the following command, we can have ffplay read from our video capture device and display the HDMI video stream on the screen.

ffplay -f v4l2 -input_format nv12 -video_size 1920x1080 -framerate 30 -i /dev/video0

In your case, the format (nv12), resolution, and device file name may need to be different. If all goes well after running that command, you will see the video stream. Let’s have the video dump to a file now.

ffmpeg -f v4l2 -thread_queue_size 1024 -input_format nv12 -video_size 1920x1080 -framerate 30 -i /dev/video0 -f pulse -thread_queue_size 1024 -i default -codec copy output.avi

This command will send the recorded data to an AVI file. AVI is one of the envelope formats in which audio, video, and other data can be packaged together. You will probably want to convert this to a more portable format. We can also use ffmpeg to convert the output file from AVI to MP4. I’m going to use H.264 for video encoding and AAC for audio encoding.

ffmpeg -i output.avi  -vcodec libx264 -acodec aac -b:v 2000k -pix_fmt yuv420p output.mp4

You can find an audio entry on this blog post on Spotify!

Those of you who follow me on Instagram may have seen a picture of the equipment that I used to record the video walkthrough of how to do this. A list of the items used is below. Note that these are Amazon Affiliate links.

  • Blackmagic Design ATEM Mini
  • Blackmagic Design ATEM Mini Pro
  • Blackmagic Design ATEM Mini Pro ISO

Visual Studio 2022 Release Date

Visual Studio 2022 has been available in preview form for a while; I’ve got the release candidate running now. And there is now a release date for it. If you are looking to upgrade to VS 2022, your wait will be over on November 8, 2021, when Microsoft is holding a launch event for Visual Studio 2022. This isn’t just a software release; the event will also include demonstrations of what Visual Studio 2022 brings.

Visual Studio 2022 brings greater support for and integration with GitHub (Microsoft agreed to purchase GitHub back in 2018), along with code editor and debugging improvements. The range of new features touches WPF, WinForms, WinUI, and ASP.NET, as well as areas not traditionally thought of as Windows-specific, such as cross-platform game technologies, developing applications for Mac, and apps for Linux.

The fun starts on November 8, 8:30AM PDT. Learn more here.

#VS2022




Simple HTTP Server in .Net

.Net and .Net Core both already provide fully functional HTTP servers, as does IIS on Windows. But I found the need to make my own HTTP server in .Net for a plugin that I was making for a game (Kerbal Space Program, specifically). For my scenario, I was trying to limit the assemblies that I needed to add to the game to as few as possible, so I decided to build my own instead of adding references to the assemblies that had the functionality I wanted.

For the class that will be my HTTP server, only two member variables are needed: one to hold a reference to the thread that will accept requests, and another for the TcpListener on which incoming requests will come.

Thread _serverThread = null;
TcpListener _listener;

I need to be able to start and stop the server at will. For now, when the server stops, all I want it to do is terminate the thread and release any network resources it held. In the Start function, I want to create the listener and start the thread for receiving requests. I could have the server listen only on the loopback adapter (localhost) by using the IP address 127.0.0.1 (IPv4) or ::1 (IPv6). This would generally be preferred unless there is a reason for external machines to access the service. I’ll need this to be accessible from another device, so here I will use the IP address 0.0.0.0 (IPv4) or :: (IPv6), which accepts connections on all network interfaces.

public void Start(int port = 8888)
{
    if (_serverThread == null)
    {
        // new IPAddress(0) is 0.0.0.0; listen on all interfaces.
        IPAddress ipAddress = new IPAddress(0);
        _listener = new TcpListener(ipAddress, port);
        _serverThread = new Thread(ServerHandler);
        _serverThread.Start();
    }
}

public void Stop()
{
    if(_serverThread != null)
    {
        _serverThread.Abort();
        _serverThread = null;
    }
    if (_listener != null)
    {
        // Release the port that the listener was bound to.
        _listener.Stop();
        _listener = null;
    }
}

The TcpListener has been created, but it isn’t doing anything yet. The call to have it listen for a request is a blocking call, so the TcpListener will do its listening on a different thread. When a request comes in, we read the request that was sent and then send a response. I’ll read the entire request and store it in a string, but I’m not doing anything with the request just yet. For the sake of getting to something that functions quickly, I’m going to hardcode a response.

String ReadRequest(NetworkStream stream)
{
    MemoryStream contents = new MemoryStream();
    var buffer = new byte[2048];
    do
    {
        var size = stream.Read(buffer, 0, buffer.Length);
        if(size == 0)
        {
            return null;
        }
        contents.Write(buffer, 0, size);
    } while (stream.DataAvailable);
    var retVal = Encoding.UTF8.GetString(contents.ToArray());
    return retVal;
}

void ServerHandler(Object o)
{
    _listener.Start();
    while(true)
    {
        TcpClient client = _listener.AcceptTcpClient();
        NetworkStream stream = client.GetStream();

        try
        {
            var request = ReadRequest(stream);

            var responseBuilder = new StringBuilder();
            responseBuilder.AppendLine("HTTP/1.1 200 OK");
            responseBuilder.AppendLine("Content-Type: text/html");
            responseBuilder.AppendLine();
            responseBuilder.AppendLine("<html><head><title>Test</title></head><body>It worked!</body></html>");
            responseBuilder.AppendLine("");
            var responseString = responseBuilder.ToString();
            var responseBytes = Encoding.UTF8.GetBytes(responseString);

            stream.Write(responseBytes, 0, responseBytes.Length);

        }
        finally
        {
            stream.Close();
            client.Close();
        }
    }
}

To test the server, I made a .Net console program that instantiates the server.

namespace TestConsole
{
    class Program
    {
        static void Main(string[] args)
        {

            var x = new HTTPKServer();
            x.Start();
            Console.ReadLine();
            x.Stop();
        }
    }
}

I ran the program and opened a browser to http://localhost:8888. The web page showed the response “It worked!”. Now to make it a bit more flexible. The logic for what to do with a request will be handled elsewhere; I don’t want it to be part of the logic for the server itself. I’m adding a delegate to my server. The delegate function will receive the request string and must return the response bytes that should be sent. I’ll also need to know the MIME type, so I’ve made a class for holding that information.

public class Response
{
    public byte[] Data { get; set; }
    public String MimeType { get; set; } = "text/plain";
}

public delegate Response ProcessRequestDelegate(String request);
public ProcessRequestDelegate ProcessRequest;

I’m leaving the hardcoded response in place, though I am changing the message to say that no request processor has been added. Generally, the code expects that the caller has registered a delegate to perform request processing. If it has not, then this will serve as a message to the developer.

The updated method looks like the following.

void ServerHandler(Object o)
{
    _listener.Start();
    while(true)
    {
        TcpClient client = _listener.AcceptTcpClient();
        NetworkStream stream = client.GetStream();

        try
        {
            var request = ReadRequest(stream);

            if (ProcessRequest != null)
            {
                var response = ProcessRequest(request);
                var responseBuilder = new StringBuilder();
                responseBuilder.AppendLine("HTTP/1.1 200 OK");      
                responseBuilder.AppendLine("Content-Type: application/json");
                responseBuilder.AppendLine($"Content-Length: {response.Data.Length}");
                responseBuilder.AppendLine();

                var headerBytes = Encoding.UTF8.GetBytes(responseBuilder.ToString());

                stream.Write(headerBytes, 0, headerBytes.Length);
                stream.Write(response.Data, 0, response.Data.Length);
            }
            else
            {
                var responseBuilder = new StringBuilder();
                responseBuilder.AppendLine("HTTP/1.1 200 OK");
                responseBuilder.AppendLine("Content-Type: text/html");
                responseBuilder.AppendLine();
                responseBuilder.AppendLine("<html><head><title>Test</title></head><body>No Request Processor added</body></html>");
                responseBuilder.AppendLine("");
                var responseString = responseBuilder.ToString();
                var responseBytes = Encoding.UTF8.GetBytes(responseString);

                stream.Write(responseBytes, 0, responseBytes.Length);
            }
        }
        finally
        {
            stream.Close();
            client.Close();
        }
    }
}

The test program now registers a delegate. The delegate will show the request and send a response derived from the current time. I’m marking the response as a JSON response.

static Response ProcessMessage(String request)
{
    Console.Out.WriteLine($"Request:{request}");
    var response = new HTTPKServer.Response();
    response.MimeType = "application/json";
    var responseText = "{\"now\":" + (DateTime.Now).Ticks + "}";
    var responseData = Encoding.UTF8.GetBytes(responseText);
    response.Data = responseData;
    return response;

}

static void Main(string[] args)
{
    var x = new HTTPKServer();
    x.ProcessRequest = ProcessMessage;
    x.Start();
    Console.ReadLine();
    x.Stop();
}
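Since the delegate receives the raw HTTP request text, it can also make routing decisions. The following is a hypothetical sketch of such a delegate, not part of the original plugin; it pulls the method and path out of the request line and guards against the empty request that shows up in the output below.

static Response RouteRequest(String request)
{
    var response = new HTTPKServer.Response { MimeType = "text/plain" };
    // The request line is the first line, e.g. "GET /HeyYall HTTP/1.1".
    var firstLine = (request ?? String.Empty).Split('\r', '\n')[0];
    var parts = firstLine.Split(' ');
    if (parts.Length < 2)
    {
        response.Data = Encoding.UTF8.GetBytes("Bad request");
        return response;
    }
    var method = parts[0]; // e.g. GET
    var path = parts[1];   // e.g. /HeyYall
    response.Data = Encoding.UTF8.GetBytes($"You sent a {method} request for {path}");
    return response;
}

Registering it works the same as before: x.ProcessRequest = RouteRequest;.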

I grabbed my iPhone and made a request to the server. From typing the URL in, there are actually two requests: one for the URL that I typed and one for an icon for the site.

Request:
Request:GET /HeyYall HTTP/1.1
Host: 192.168.50.79:8888
Upgrade-Insecure-Requests: 1
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
User-Agent: Mozilla/5.0 (iPhone; CPU iPhone OS 14_6 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) CriOS/91.0.4472.80 Mobile/15E148 Safari/604.1
Accept-Language: en-us
Accept-Encoding: gzip, deflate
Connection: keep-alive


Request:GET /favicon.ico HTTP/1.1
Host: 192.168.50.79:8888
Connection: keep-alive
User-Agent: Mozilla/5.0 (iPhone; CPU iPhone OS 14_6 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) CriOS/91.0.4472.80 Mobile/15E148 Safari/604.1
Accept-Encoding: gzip, deflate
Accept-Language: en,en-US;q=0.9,ja-JP;q=0.8

To make sure this works, I loaded it into my game and made a request. The request was successful, and I am ready to move on to implementing the logic that is needed for the game.

Why Your Computer Might not Run Windows 11

As of October 5, 2021, the RTM version of Windows 11 is available for download. I’ve downloaded it and have tried to install it on a range of machines. In doing this, the first thing that stands out is that there are a lot of otherwise capable machines in existence today that may not be able to run the new operating system. There are three requirements that tended to be the obstacles on the computers on which I did not install Windows 11.

  • Secure Boot Enabled
  • TPM 2.0
  • Unsupported Processor

Because of the Secure Boot and TPM requirements, I found that I could not install Windows 11 on my Macs using Boot Camp. The guides that I’ve found all have Windows 11 on a Mac being installed within a virtual machine. The unsupported processor issue was not expected. One of the computers on which the installation failed has a 3.0 GHz, 16-core Xeon processor and an RTX 3090 video card. This computer has 64 gigs of RAM, 2 terabytes of M.2 storage, and a few terabytes on conventional drives. But its processor is not supported. Even if your computer matches the Windows 11 requirements on paper, that doesn’t give assurance that it is actually compatible. If you want to test for yourself, the best method is to run the Windows 11 upgrade advisor.

Microsoft is using virtualization-based security (VBS) to protect processes in Windows 11. There is a sub-feature of this called Hypervisor-Protected Code Integrity (HVCI) that prevents code injection attacks. Microsoft has said that computers with processors that support this feature have a 99.8% crash-free experience (source). For purposes of reliability and security, Microsoft has decided that this feature will be part of the baseline for Windows 11. Back in August, Microsoft made a blog post stating that they had found some older processors that met their requirements and would be adding them to the compatibility list. There’s a chance that a computer that shows as not compatible today could show as compatible later.

A Windows feature that I’ve enjoyed using is Windows To Go (W2G). With W2G, a Windows environment is installed on a USB drive specifically made for this feature (it would not quite work on a regular drive). The W2G drive could be plugged into any PC or Intel-based Mac as a boot drive. It was a great way to carry an environment around to use as an emergency drive. Microsoft discontinued support for the feature some time ago, but it still worked. With Windows 11, this feature is effectively dead.

You can find both the upgrade advisor and the Windows 11 download at the following link.

https://www.microsoft.com/en-us/software-download/windows11

Windows 11 offers a lot of consumer-focused features and a new look. But my interest is in the new APIs that the OS provides. Microsoft has extended the DirectX APIs, especially in Composition and DirectDisplay. The Bluetooth APIs have extended support for low energy devices. And there is now support for haptic pen devices and more phone control.

I was already in the market for a new laptop, so I’ll be getting another computer that runs Windows 11 soon enough. In the meantime, I’ve moved one of my successful installs to my primary work area so that I can try it out as my daily driver. More to come…



Conferences and a Hearing, Sept-Oct 2021

During the month of October, there are a couple of developer conferences happening. Samsung is resuming what had been their regular Developers Conference (there wasn’t one in 2020, for obvious reasons). Like so many other conferences, this one is going to be online, on 26 October. Details of what will be in it haven’t been shared yet, but I noticed a few things from the iconography of their promotional video.

The Tizen logo is present, specifically on a representation of a TV. It looks as though Samsung has abandoned the Tizen OS for everything else. They generally don’t announce that they are sunsetting a technology, instead opting to quietly let it disappear. A few months ago, Google made the ambiguous announcement that Samsung and Google were combining their wearable operating systems into a single platform while not directly saying that Tizen was going away. Just before the release of the Galaxy Watch 4 (which runs Wear OS, not Tizen), Samsung announced that they were still supporting Tizen. But with no new products on the horizon and the reduction in support in the store, this looks more like a phased product sunset.

Some of the other products suggested by the imagery include wearables, Smart Things (home automation), Bixby (voice assistant) and Samsung Health.

October 12-14, Google is hosting their Cloud Next conference. Registration for this conference is open now, and available at no cost. Google has made the session catalog available. The session categories include AI/Machine Learning, Application Development, Security, and more.

Sessions available at https://cloud.withgoogle.com/next

And last, if you have an interest in the USA’s developing responses to technology issues, this Thursday the Senate Committee on Commerce, Science, and Transportation is holding a hearing with Facebook’s head of safety over some recent reports published by the Wall Street Journal about the impact of its apps on younger audiences. The hearing (with live stream) will be Thursday, September 30, 2021 at 10:30 AM EDT. The livestream will be available at www.commerce.senate.gov.



Sharing Resources With Your Chromebook Linux Container

If you have installed Linux on your Chromebook, you may have noticed that the file system as viewed from the Files application on your Chromebook and the file system in the Linux terminal do not look alike. This is because Linux is running within a container. There are two ways to share files between your Chromebook and the Linux container.

If you open the Files application on your Chromebook, you will see an area called Linux Files. This gives you access to the files in your Linux home directory from ChromeOS. Linux, however, doesn’t have immediate access to the files on the Chromebook. To access those, you need to explicitly share the folders with Linux. From the Files application, find the folder that you want to share, right-click on it, and select “Share with Linux.” From within Linux, if you navigate to the path /mnt/chromeos, you will see sub-folders that are mount points for each of the folders you’ve shared from ChromeOS.

You can also share USB drives with Linux. By default, they are not available. If you open Settings and look for “Manage USB Devices” the USB drives that are connected to your machine will be listed. You can select a drive to share with Linux from there. Note that when you disconnect the drive, the next time that it is reconnected it will not automatically be shared.

The Linux container’s ports are also not exposed to your network by default. For the ports to be visible to other devices on your network, you must explicitly share them. Under Settings, if you look for “Port Forwarding,” you will be taken to an interface where you can specify the ports that will be exposed. Note that you can only add ports in the range of 1024 to 65,535.