Connecting to AWS IoT’s MQTT Server with .Net

A client I was working with wanted a solution to use AWS IoT for broadcasting messages among the computers making up the solution and for controlling various machines. I worked on the solution, but had minimal access to their AWS resources, which is consistent with their security policies. Usually, when I have access to the AWS subscription, I can use the MQTT viewer that is part of the service for performing some diagnostic tasks. With this client, I didn’t have that access and had to make my own viewer for performing diagnostics.

Making a viewer is pretty easy once you have the resources that you need. I chose WPF because of the speed at which a functional UI could be built, along with M2Mqtt as an MQTT client. Before writing any code, there was some work that needed to be done with the certificates. When accessing an AWS MQTT instance, the information that you will need includes the domain name of the MQTT instance, an Amazon root certificate, a certificate file, and a private key file. You’ll need this information packaged in a pfx file to easily use it with M2Mqtt, and packaging the certificates that way is just a matter of running a command. For the files I had, let’s use these names.

  • 000-certificate.pem.crt – The certificate file for the AWS MQTT Instance
  • 000-private.pem.key – The private key for the AWS MQTT instance
  • AmazonRootCA1.cer – The AWS root certificate

Using those file names, the command that I used was as follows.

openssl pkcs12 -export -in .\000-certificate.pem.crt -inkey .\000-private.pem.key -out certificate.pfx -certfile .\AmazonRootCA1.cer

After this command runs, I have a file named certificate.pfx. I’ll be using this in my .Net viewer.

In the interest of keeping the program reusable, I’ve placed the paths to the certificate files in the application’s settings. If I need to change these values post-compilation, they are in a JSON file (a hypothetical example follows the list below). These settings include the following.

  • BrokerDomainName – The domain name of the MQTT Instance that the application connects to
  • BrokerCertificatePath – The relative file path to the device certificate
  • BrokerRootCertificatePath – The relative path to the AWS root certificate
  • BrokerPort – The port that the MQTT instance is using. Amazon uses port 8883
  • ClientPrefix – Prefix for the name that will be used for identifying this client.
  • DefaultTopic – The topic that the client will automatically subscribe to upon starting
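
As an illustration only (these values are hypothetical, not taken from the client’s project), the settings file might look something like the following.

{
  "BrokerDomainName": "xxxxxxxxxx-ats.iot.us-east-1.amazonaws.com",
  "BrokerCertificatePath": "certificate.pfx",
  "BrokerRootCertificatePath": "AmazonRootCA1.cer",
  "BrokerPort": 8883,
  "ClientPrefix": "MessageWatcher",
  "DefaultTopic": "#"
}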

Concerning the client prefix: every client is identified by a unique string. To ensure the string is unique, I generate a GUID when the application starts. It’s possible that someone starts more than one instance of the program, so I don’t persist the GUID; a second instance of the program will have a different GUID. But I still want the string to be recognizable as having come from this program, and the ClientPrefix string puts a recognizable prefix before the GUID to satisfy this need.

The paths in the settings are relative to the application. To load the certificates using these paths and to generate a new client name, I use the following code.

Settings ds = Settings.Default;
var rootCertificatePath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, ds.BrokerRootCertificatePath);
var deviceCertificatePath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, ds.BrokerCertificatePath);
var rootCertificate = X509Certificate.CreateFromCertFile(rootCertificatePath);
var deviceCertificate = X509Certificate.CreateFromCertFile(deviceCertificatePath);
clientName = $"{ds.ClientPrefix}-{Guid.NewGuid()}";

With that, we have all the information that we need for connecting to the MQTT broker. We can instantiate an MqttClient, subscribe to its events, and start showing the message topics. The call to establish a connection is a blocking call; when it returns, a connection will have been established.

client = new MqttClient(ds.BrokerDomainName, ds.BrokerPort, true, rootCertificate, deviceCertificate, MqttSslProtocols.TLSv1_2);
client.MqttMsgSubscribed += Client_MqttMsgSubscribed;            
client.MqttMsgPublishReceived += Client_MqttMsgPublishReceived;
try
{
    client.Connect(clientName);

    var defaultTopic = ds.DefaultTopic;
    if (!String.IsNullOrEmpty(defaultTopic))
    {
        client.Subscribe(new string[] { defaultTopic }, new byte[] { MqttMsgBase.QOS_LEVEL_AT_LEAST_ONCE });
    }
}
catch (Exception ex)
{
    MessageBox.Show(ex.Message, "Could not connect");
}

The primary information of concern in a message is the topic. The message may also have a payload; I capture both of those items. It is possible that the payload contains data that isn’t a string, so I’ve decided not to display it, but I capture it anyway should I change my mind. To hold the information, I use the following class.

public class ReceivedMessage: ViewModelBase
{

    private string _topic;
    public String Topic { 
        get { return _topic; }
        set
        {
            SetValueIfChanged(() => Topic, () => _topic, value);
        }
    }

    private byte[] _payload;
    public byte[] Payload
    {
        get { return _payload; }
        set
        {
            SetValueIfChanged(() => Payload, () => _payload, value);
        }
    }

    DateTimeOffset _timestamp = DateTimeOffset.Now;
    public DateTimeOffset Timestamp
    {
        get { return _timestamp; }
        set
        {
            SetValueIfChanged(()=>Timestamp, () => _timestamp, value);
        }
    }
}

The base class from which this derives, ViewModelBase, is something I’ve talked about before; see this post if you need more information on how it allows the class to be bound to the UI. As new messages come in, I copy their content into instances of ReceivedMessage and add them to an ObservableCollection. The collection is bound to a ListView on the UI.
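
Handling an incoming message is then a matter of copying the event data into a ReceivedMessage and adding it to that collection on the UI thread. The handler below is a minimal sketch; the viewModel field and its placement are assumptions based on the XAML binding that follows, though M2Mqtt’s MqttMsgPublishEventArgs really does expose Topic and Message.

private void Client_MqttMsgPublishReceived(object sender, MqttMsgPublishEventArgs e)
{
    var message = new ReceivedMessage
    {
        Topic = e.Topic,
        Payload = e.Message // raw payload bytes; captured but not displayed
    };
    // ObservableCollection is not thread-safe; marshal the add onto the UI thread.
    Application.Current.Dispatcher.Invoke(() => viewModel.MessageList.Add(message));
}

The entirety of the main UI window declaration follows. This UI uses only the default code for its code-behind.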

<Window x:Class="MessageWatcher.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
        xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
        xmlns:vm="clr-namespace:MessageWatcher.ViewModels"
        xmlns:local="clr-namespace:MessageWatcher"
        mc:Ignorable="d"
        Title="MainWindow" Height="450" Width="800">

    <Grid>
        <Grid.DataContext>
            <vm:MainViewModel />
        </Grid.DataContext>

        <ListView ItemsSource="{Binding MessageList}">
            <ListView.ItemTemplate>
                <DataTemplate>
                    <Grid>
                        <TextBlock Text="{Binding Topic}" />
                    </Grid>
                </DataTemplate>
            </ListView.ItemTemplate>
        </ListView>
    
    </Grid>
</Window>

With that, I had a working viewer for monitoring the messages as they went by.



Updated ViewModelBase for my WPF Projects

When I’m working on projects that use WPF, there’s a set of base classes that I usually use for classes that are bindable to the UI. Back in 2016 I shared the code for these classes, and in 2018 I shared an updated version of them for UWP. I now have another updated version that I’m sharing, in part because it is used in some code for a future post.

For those unfamiliar, the WPF-derived Windows UI technologies support binding UI elements to classes in code so that the UI is automatically updated when a value in the class changes. Classes that are bound to the UI must implement the INotifyPropertyChanged interface. This interface defines a single event of type PropertyChangedEventHandler. The event is raised to indicate that a property on the class has changed, and the event data contains the name of the class member that changed. UI elements bound to that member get updated in response.

The class that I’ve defined, ViewModelBase, eases management of that event. The PropertyChanged event expects the name of the property to be passed as a string. If the property name is typed directly as a string in code, there’s opportunity for bugs from mistyping it, or for a string value not being updated when a property is renamed. To reduce the possibility of such bugs, the ViewModelBase that I’m using uses Expressions instead of strings. With a bit of reflection, I extract the name of the field/property referenced, which guarantees that only valid values are used; passing something invalid results in a failure at compile time. Using an expression also allows IntelliSense to be used, which reduces the chance of typos, and if a field or property is renamed through the Visual Studio IDE, these expressions are updated too. While I did this in previous versions of these classes, it was necessary to make some updates to how it works based on changes in .Net.

Additionally, this version of the class can be used in code with multiple threads. It uses a Dispatcher to ensure that the PropertyChanged events are routed to the UI thread before being raised.

using System;
using System.ComponentModel;
using System.Linq.Expressions;
using System.Reflection;
using System.Runtime.Serialization;
using System.Windows.Threading;
using Newtonsoft.Json; // for [JsonIgnore]; swap in System.Text.Json.Serialization if that serializer is used

public abstract class ViewModelBase : INotifyPropertyChanged
{
    public ViewModelBase()
    {
        this.Dispatcher = Dispatcher.CurrentDispatcher;
    }
    [IgnoreDataMember]
    [JsonIgnore]
    public Dispatcher Dispatcher { get; set; }

    protected void DoOnMain(Action a)
    {
        Dispatcher.Invoke(a);
    }

    public event PropertyChangedEventHandler PropertyChanged;
    protected void OnPropertyChanged(String propertyName)
    {
        if (PropertyChanged != null)
        {
            Action a = () => { PropertyChanged(this, new PropertyChangedEventArgs(propertyName)); };
            if (Dispatcher != null)
            {
                Dispatcher.Invoke(a);
            }
            else
            {
                a();
            }
        }
    }

    protected void OnPropertyChanged(Expression expression)
    {
        OnPropertyChanged(((MemberExpression)expression).Member.Name);
    }

    protected bool SetValueIfChanged<T>(Expression<Func<T>> propertyExpression, Expression<Func<T>> fieldExpression, object value)
    {
        var property = (PropertyInfo)((MemberExpression)propertyExpression.Body).Member;
        var field = (FieldInfo)((MemberExpression)fieldExpression.Body).Member;
        return SetValueIfChanged(property, field, value);
    }

    protected bool SetValueIfChanged(PropertyInfo pi, FieldInfo fi, object value)
    {
        var currentValue = pi.GetValue(this, new object[] { });
        if ((currentValue == null && value == null) || (currentValue != null && currentValue.Equals(value)))
            return false;
        fi.SetValue(this, value);
        OnPropertyChanged(pi.Name);
        return true;
    }
}

Commands are classes that implement the ICommand interface. ICommand objects are bound to UI elements such as buttons and used for invoking code. The interface defines three members: an event named CanExecuteChanged and two methods, CanExecute() and Execute(). The Execute() method contains the code that is meant to be invoked when the user clicks on the button. CanExecute() returns true or false to indicate whether the command should be presently invokable. You may want a command to not be invokable until the user has satisfied certain conditions. For example, a UI that uploads a file might want the upload button disabled until the user has selected a file and that file is valid; for that scenario, your implementation of CanExecute() would check both conditions and only return true if both are satisfied. Or, if the command starts a long-running task of which only one instance is allowed, CanExecute() may only return true if no instances of the task are running.

In this code, I’ve got a DelegateCommand class defined. This class has been around in its current form for longer than I remember; I believe it was originally in something that Microsoft provided, and it hasn’t changed much. To facilitate creating objects that contain the code, the DelegateCommand class accepts an action (a function that can be easily passed around as an object). The constructor for the class accepts either just an action to execute, or an action and a function used by CanExecute() (if no function is passed for CanExecute(), it is assumed that the command can always be executed).

There are two versions of the DelegateCommand class. One is a Generic implementation used when the execute command must also accept a typed parameter. The non-generic implementation receives an Object parameter.

    public class DelegateCommand : ICommand
    {
        public DelegateCommand(Action execute)
            : this(execute, null)
        {
        }

        public DelegateCommand(Action execute, Func<object, bool> canExecute)
        {
            _execute = execute;
            _canExecute = canExecute;
        }

        public bool CanExecute(object parameter)
        {
            if (_canExecute != null)
                return _canExecute(parameter);
            return true;
        }

        public void Execute(object parameter)
        {
            _execute();
        }

        public void RaiseCanExecuteChanged()
        {
            if (CanExecuteChanged != null)
                CanExecuteChanged(this, EventArgs.Empty);
        }

        public event EventHandler CanExecuteChanged;

        private Action _execute;
        private Func<object, bool> _canExecute;
    }


    public class DelegateCommand<T> : ICommand
    {
        public DelegateCommand(Action<T> execute)
            : this(execute, null)
        {
        }

        public DelegateCommand(Action<T> execute, Func<T, bool> canExecute)
        {
            _execute = execute;
            _canExecute = canExecute;
        }

        public bool CanExecute(object parameter)
        {
            if (_canExecute != null)
            {

                return _canExecute((T)parameter);
            }

            return true;
        }

        public void Execute(object parameter)
        {
            _execute((T)parameter);
        }

        public void RaiseCanExecuteChanged()
        {
            if (CanExecuteChanged != null)
                CanExecuteChanged(this, EventArgs.Empty);
        }

        public event EventHandler CanExecuteChanged;

        private Action<T> _execute;
        private Func<T, bool> _canExecute;
    }
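
As a quick usage sketch (my example, not from the original post): a view model with a hypothetical SelectedFile property whose command only becomes executable once a file has been chosen.

public class UploadViewModel : ViewModelBase
{
    private string _selectedFile;
    public string SelectedFile
    {
        get { return _selectedFile; }
        set
        {
            if (SetValueIfChanged(() => SelectedFile, () => _selectedFile, value))
                UploadCommand.RaiseCanExecuteChanged(); // re-evaluate the button state
        }
    }

    public DelegateCommand UploadCommand { get; }

    public UploadViewModel()
    {
        UploadCommand = new DelegateCommand(
            () => { /* perform the upload */ },
            _ => !String.IsNullOrEmpty(SelectedFile));
    }
}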

By itself, this code might not be meaningful. But I’ll be referencing it in future content, starting with a post scheduled for a couple of weeks after this one.




Auto-Syncing Node on BrightSign for Development

I’m working on a Node project that runs on BrightSign hardware. To run a Node project, one only needs to copy the project files to the root of a memory card and insert it into the device. When the device is rebooted (by power cycling), the updated project will run. For me, this deployment process, though simple, has a problem during the development phase: there’s no way for me to remotely debug what the application is doing, which results in a lot more trial-and-error to get something working correctly since some error information is inaccessible. Usually, when testing the effect of a code change, I’m developing directly on a compatible system and can just run the project and see it execute. Copying the project to a memory card, walking over to the BrightSign, inserting the card, removing the power connector, and then reinserting the power connector takes longer. Consider that there are many cycles of this that must be done and you can see there is a productivity penalty.

In seeking a better way, I found an interface in the BrightSign diagnostic console that allows someone to update individual files. Navigating to the BrightSign’s IP address displays this console.

It doesn’t allow the creation of folders, though. After using Chrome’s developer tools to figure out what endpoints the HTML diagnostic interface was hitting, I had enough information to update files through my own code instead. Using this information, I was able to make a program that copies files from my computer to the BrightSign over the network, saving me some back-and-forth in moving the memory card. I still needed to perform the initial copy manually so that the necessary subfolders were in place, and there’s a limit of 10 MB per file when transferring files this way, which means media has to be transferred manually. But after that, I could copy the files without getting up, reducing the effort of running new code to power cycling the device. I created a quick .NET 7 program; I used .NET so that I could run it on my PC or my Mac.

The first thing that my program needs to do is collect a list of the files to be transferred. The program accepts a folder path that represents the contents of the root of the memory card and builds a list of the files within it.

public FileInfo[] BuildFileList()
{
    Stack<DirectoryInfo> directoryList = new Stack<DirectoryInfo>();
    List<FileInfo> fileList = new List<FileInfo>();
    directoryList.Push(new DirectoryInfo(SourceFolder));

    while(directoryList.Count > 0)
    {
        var currentDirectory = new DirectoryInfo(directoryList.Pop().FullName);
        var directoryFileList = currentDirectory.GetFiles();
        foreach(var d in directoryFileList)
        {
            if(!d.Name.StartsWith("."))
            {
                fileList.Add(d);
            }
        }                
        var subdirectoryList = currentDirectory.GetDirectories();
        foreach(var d in subdirectoryList)
        {
            if(!d.Name.StartsWith("."))
            {
                directoryList.Push(d);
            }
        }
    }
    return fileList.ToArray();
}

To copy the files to the machine, I iterate through this array and upload each file, one at a time. Each upload request must include the file’s data and the path of the folder in which to put it. The root of the SD card is in the file path sd, thus all paths will be prepended with sd/. To build the remote folder path, I take the absolute path to the source file, strip off the front part of that path, and replace it with sd/. Since I only need the folder path, I also strip the file name from the end.

public void UploadFile(FileInfo fileInfo)
{
    var remotePath = fileInfo.FullName.Substring(this.SourceFolder.Length);
    var separatorIndex = remotePath.LastIndexOf(Path.DirectorySeparatorChar);
    var folderPath = "sd/" + remotePath.Substring(0, separatorIndex);
    UploadFile(fileInfo, folderPath);
}

With the file parts separated, I can perform the actual file upload. Since this can run on a Windows or *nix machine, the wrong path separator may be in use, so I replace instances of the backslash with the forward slash. The exact remote endpoint to be used has changed with the BrightSign firmware version; I am using a November 2022 firmware. For these devices, the endpoint to write to is /api/v1/files/ followed by the remote path. On some older firmware versions, the path is /uploads.html?rp= followed by the remote folder path.

public void UploadFile(FileInfo fileInfo, string remoteFolder)
{
    if (!fileInfo.Exists)
    {
        return;
    }
    remoteFolder = remoteFolder.Replace("\\", "/");
    if (remoteFolder.EndsWith("/"))
    {
        remoteFolder = remoteFolder.Substring(0, remoteFolder.Length - 1);
    }
    if (remoteFolder.StartsWith("/"))
    {
        remoteFolder = remoteFolder.Substring(1);
    }

    String targetUrl = $"http://{BrightsignAddress}/api/v1/files/{remoteFolder}";
    var client = new RestClient(targetUrl);
    var request = new RestRequest();
    request.AddFile("datafile[]", fileInfo.FullName);
    try
    {
        var response = client.ExecutePut(request);
        Console.WriteLine($"Uploaded File:{fileInfo.FullName}");
        //Console.WriteLine(response.Content);
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.Message);
    }
}

I found that if I tried to upload a file that already existed, the upload would fail. To resolve this problem, I make a request to delete a file before uploading it. If the file doesn’t exist when the delete request is made, no harm is done.

public void DeleteFile(string filePath)
{
    filePath = filePath.Replace("\\", "/");
    if (filePath.StartsWith("/"))
    {
        filePath = filePath.Substring(1);
    }
    string targetPath = $"http://{this.BrightsignAddress}/delete?filename={filePath}&delete=Delete";
    var client = new RestClient(targetPath);
    var request = new RestRequest();
    try
    {
        var response = client.ExecuteGet(request);
        Console.WriteLine($"Deleted File:{filePath}");
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.Message);
    }
}

Once I had the code this far, I was already saving time. To save even more, I used a FileSystemWatcher to trigger my code when there was a change to any of the files.

FileSystemWatcher fsw = new FileSystemWatcher(@"d:\MyFilePath");
fsw.NotifyFilter = NotifyFilters.CreationTime | NotifyFilters.FileName | NotifyFilters.Size | NotifyFilters.LastWrite;
fsw.Changed += OnChanged;
fsw.Created += OnChanged;
fsw.IncludeSubdirectories = true;
fsw.EnableRaisingEvents = true;

My change handler uploads the specific file that changed. It was still necessary to power-cycle the machine, but the diagnostic interface also had a button for rebooting it, and with a little bit of probing I found the endpoint for requesting a reboot. Instead of rebooting every time a file was updated, I decided to reboot several seconds after a file was updated; if another file is updated before those several seconds have passed, I delay further. This way, if there are several files being updated, there is a chance for the update operation to complete before a reboot occurs.

static System.Timers.Timer rebootTimer = new System.Timers.Timer(5000);

public void Reboot()
{
    string targetPath = $"http://{this.BrightsignAddress}/api/v1/control/reboot";
    var client = new RestClient(targetPath);
    var request = new RestRequest();
    var response = client.ExecutePut(request);
}

static void OnChanged(object sender, FileSystemEventArgs e)
{
    var f = new FileInfo(e.FullPath);
    // s is the uploader instance; DeleteFile expects the remote path,
    // so derive it from the local path the same way UploadFile does.
    s.DeleteFile(f.FullName.Substring(s.SourceFolder.Length));
    s.UploadFile(f);
    rebootTimer.Stop();
    rebootTimer.Start();
}
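
One piece not shown above is the timer’s Elapsed subscription. Wiring it up might look like the following; this is an assumption based on the behavior described, with s again being the instance that exposes Reboot().

rebootTimer.AutoReset = false;                    // fire once per burst of changes
rebootTimer.Elapsed += (sender, e) => s.Reboot(); // reboot after the quiet period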

Now, as I edit my code, the BrightSign is updated. When I’m ready to see something run, I only need to wait for a reboot. The specific device I’m using takes 90 seconds to reboot, but during that time I’m able to remain productive.

There is still room for improvement, such as doing a more complete sync and removing files on the remote BrightSign that are not present in the local folder. But this was something I quickly put together to save time on my primary tasks; an incomplete but adequate solution made more sense. Were I to spend too much time with it, it would no longer be a time saver, but a productivity distraction.




Working With Alternative Data Streams: The “Hidden” Part of Your Windows File System

In the interest of keeping a cleaner file system, I sometimes try to minimize the number of files I need to keep data organized. A common scenario where this goal comes up is writing an application that must sync with some other data source, such as a content management system. Having a copy of the files from another system isn’t always sufficient; sometimes additional data is needed for keeping the computers in a solution in sync. For a given file, I may also need to track an ETag, CRC, or information on the purpose of the file. There are some common solutions for organizing this data. One is to have a single additional file that contains all of the metadata for the files being synced. Another is to make an additional data file for each content file. The solution I prefer is neither of these: I prefer to keep the data within an “alternative stream” of the same file. If the file gets moved elsewhere on the file system, the additional data moves with it.

This is very much a Windows-only solution, and it will only work on the NTFS file system. If you attempt to access an alternative stream on a FAT32 file system, it will fail, since that file system does not support them.

I most recently used this system of organization when I inherited a project that was, in my opinion, built with the wrong technology. In all fairness, many features of the application in question came in through scope creep. I reimplemented the application in about a couple of days using .Net technologies (this was much easier for me to do since, unlike the original developers, I had the benefit of a complete set of requirements). There are a lot of aspects of that project that will be expressed in posts in the coming weeks.

The reason that I call this feature “hidden” is that the Windows UI gives no visual indicator that a file has an additional data stream. There is no special icon, and if you check the size of a file, it will only show you the size of the main data stream. (In theory, one could add 3 GB of data to the secondary stream of a 5-byte file and the OS would still report the file as being 5 bytes in size.)

I’ll demonstrate accessing this stream from within the .Net Framework. There’s no built-in support for alternative data streams, but there is built-in support for Windows file handles. Using P/Invoke, you can get a Windows file handle and then pass it to a .Net FileStream object to be used with all the other .Net features.

Every file has a default data stream; this is the stream you would see as the normal file, containing the data you are usually working with. With our normal concept of files and directories, files contain data and directories contain files but no data directly. Yet a file can contain any number of alternative data streams, each with a name of your choosing, and directories can have alternative data streams too!

To start experimenting with streams, you only need the command prompt. Open the command prompt, navigate to a directory in which you will place your experimental streams, and type the following.

echo This is data for my alternative stream > readme.txt:stream

If you get a directory listing, you’ll find that the file is listed as zero bytes in size.

c:\temp\streams>echo This is data for my alternative stream > readme.txt:stream

c:\temp\streams>dir
 Volume in drive C has no label.
 Volume Serial Number is 46FF-0556

 Directory of c:\temp\streams

10/11/2021  11:22 AM    <DIR>          .
10/11/2021  11:22 AM    <DIR>          ..
10/11/2021  11:22 AM                 0 readme.txt
               1 File(s)              0 bytes
               2 Dir(s)  108,162,564,096 bytes free

c:\temp\streams>

Is the data really there? From the command line, we can view the data using the more command (the type command doesn’t accept the syntax needed to refer to the stream).

c:\temp\streams>more < readme.txt:stream
This is data for my alternative stream

Windows uses alternative data streams for various system purposes, so there are a number of names you may encounter in files that Windows manages. This is a list of some well-known stream names.

  • $DATA – This is the default stream. This stream contains the main (regular) data for a file. If you open README.TXT, this has the same effect as opening README.TXT:$DATA.
  • $BITMAP – data used for managing a b-tree for a directory. This is present on every directory.
  • $ATTRIBUTE_LIST – A list of attributes for a file.
  • $FILE_NAME – Name of file in unicode characters, including short name and hard links
  • $INDEX_ALLOCATION – used for managing large directories

There are some other names as well. In general, with the exception of $DATA, I would suggest not altering these streams.

Windows does give you the ability to list alternative streams through PowerShell; we will look at that in a moment. For now, let’s say you had to make your own tool for managing such resources. The utility of this example is that it gives us an opportunity to see how we might work with these resources in code. One of the first tools I find useful is a command-line tool that transfers data from one stream to another: I can read from a stream and either write it to the console or to another stream, and the only thing that affects where it is written is the file name. It only took a few minutes to write such a tool using C++, and it is small enough to put the entirety of the code here.

#include <Windows.h>
#include <iostream>
#include <string>
#include <vector>
#include <list>

using namespace std;

const int BUFFER_SIZE = 1048576;

int main(int argc, CHAR ** argv)
{
    wstring InputStreamName = L"";
    wstring OutputStreamName = L"con:";
    wstring InputPrefix = L"--i=";
    wstring OutputPrefix = L"--o=";

    wstring Instructions =
        L"To use this tool, provide an input and an output file for it. The syntax looks like the following.\r\n\r\n"
        L"StreamStreamer --i=inputFileName --o=OutputFileName.ext::streamName\r\n\r\n";

    HANDLE hInputFile = INVALID_HANDLE_VALUE;
    HANDLE hOutputFile = INVALID_HANDLE_VALUE;

    vector<wstring> arguments(argc);

    for (auto i = 0; i < argc; ++i)
    {
        auto arg = string(argv[i]);
        arguments[i] = (wstring(arg.begin(), arg.end()));
    }

    for (int i = 0; i < argc; ++i)
    {
        if (!arguments[i].compare(0, InputPrefix.size(), InputPrefix))
            InputStreamName = arguments[i].substr(InputPrefix.size());
        if (!arguments[i].compare(0, OutputPrefix.size(), OutputPrefix))
            OutputStreamName = arguments[i].substr(OutputPrefix.size());
    }

    if ((!InputStreamName.size()) || (!OutputStreamName.size()))
    {
        wcout << Instructions;
        return 0;
    }

    hInputFile = CreateFileW(InputStreamName.c_str(), GENERIC_READ, FILE_SHARE_READ, 0, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, 0);
    if (hInputFile != INVALID_HANDLE_VALUE)
    {
        hOutputFile = CreateFileW(OutputStreamName.c_str(), GENERIC_WRITE, FILE_SHARE_READ, 0, CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, 0);
        if (hOutputFile != INVALID_HANDLE_VALUE)
        {
            vector<char> buffer = vector<char>(BUFFER_SIZE);
            DWORD bytes_read = 0;
            DWORD bytes_written = 0;
            do {
                bytes_read = 0;
                if (ReadFile(hInputFile, &buffer[0], BUFFER_SIZE, &bytes_read, 0))
                    WriteFile(hOutputFile, &buffer[0], bytes_read, &bytes_written, 0);
            } while (bytes_read > 0);
            CloseHandle(hOutputFile);
        }
        CloseHandle(hInputFile);
    }
}


Usage is simple. The tool takes two arguments: an input stream name and an output stream name, passed with the prefixes --i= and --o=. If no output name is specified, it defaults to an output name of con:. This name, con:, refers to the console; it had been a reserved file name for the console (I have the vague idea there may be some other console name, but could not find it). con: is a carry-over from the DOS days of 30+ years ago. It worked then, and it still works now, so I’m sticking with it.

After compiling this, I can use it to retrieve the text that I attached to the stream earlier.

c:\temp\streams>StreamStreamer.exe --i=readme.txt:Stream
This is data for my alternative stream

c:\temp\streams>

I can also use it to take the contents of some other arbitrary file and attach it to an existing file in an alternative stream. In testing, I took a JPG I had of the moon and attached it to a file. Then I extracted it from that alternative stream and wrote it to a different regular file just to ensure that I had an unaltered data stream.

c:\temp\streams>StreamStreamer.exe --i=Moon.JPG --o=readme.txt:moon

c:\temp\streams>StreamStreamer.exe --i=readme.txt:moon --o=m.jpg

c:\temp\streams>dir *.jpg
 Volume in drive C has no label.
 Volume Serial Number is 46FF-0556

 Directory of c:\temp\streams


10/12/2021  02:54 PM         3,907,101 m.jpg
12/21/2020  07:46 PM         3,907,101 Moon.JPG
               3 File(s)      7,814,202 bytes
               0 Dir(s)  105,063,383,040 bytes free

c:\temp\streams>

You will probably want the ability to see what streams are inside of a file. You could download the Streams tool from Sysinternals, or you could use PowerShell, which has built-in support for streams; I’ll be using it throughout the rest of this writeup. To view streams with PowerShell, use the Get-Item command with the -Stream * parameter.

PS C:\temp\streams> Get-Item .\readme.txt -stream *


PSPath        : Microsoft.PowerShell.Core\FileSystem::C:\temp\streams\readme.txt::$DATA
PSParentPath  : Microsoft.PowerShell.Core\FileSystem::C:\temp\streams
PSChildName   : readme.txt::$DATA
PSDrive       : C
PSProvider    : Microsoft.PowerShell.Core\FileSystem
PSIsContainer : False
FileName      : C:\temp\streams\readme.txt
Stream        : :$DATA
Length        : 15

PSPath        : Microsoft.PowerShell.Core\FileSystem::C:\temp\streams\readme.txt:moon.jpg
PSParentPath  : Microsoft.PowerShell.Core\FileSystem::C:\temp\streams
PSChildName   : readme.txt:moon.jpg
PSDrive       : C
PSProvider    : Microsoft.PowerShell.Core\FileSystem
PSIsContainer : False
FileName      : C:\temp\streams\readme.txt
Stream        : moon.jpg
Length        : 3907101

PSPath        : Microsoft.PowerShell.Core\FileSystem::C:\temp\streams\readme.txt:stream
PSParentPath  : Microsoft.PowerShell.Core\FileSystem::C:\temp\streams
PSChildName   : readme.txt:stream
PSDrive       : C
PSProvider    : Microsoft.PowerShell.Core\FileSystem
PSIsContainer : False
FileName      : C:\temp\streams\readme.txt
Stream        : stream
Length        : 43



PS C:\temp\streams>
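
As an aside, PowerShell can also read an individual stream directly; Get-Content accepts a -Stream parameter (and Remove-Item -Stream can delete one).

PS C:\temp\streams> Get-Content .\readme.txt -Stream stream
This is data for my alternative stream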

If you are making an application that uses alternative streams, you will want to know how to list the streams from within it also. That is also easy to do: since the much-beloved Windows Vista, there has been a Win32 API for enumerating streams. The functions FindFirstStreamW/FindFirstStreamTransactedW and FindNextStreamW will do this for you. Take note that only Unicode versions of these functions exist; ASCII variations are non-existent. If you have ever used FindFirstFile and FindNextFile, the usage is similar.

Two variables are needed to search for streams. One variable is a HANDLE that is used as an identifier for the resources and state of the search request. The other is a WIN32_FIND_STREAM_DATA structure into which data on the streams that are found is put. FindFirstStreamW will return a handle and populate a WIN32_FIND_STREAM_DATA with the first stream it finds. From there, each time FindNextStreamW is called with the HANDLE that had been returned earlier, it will populate a WIN32_FIND_STREAM_DATA with the information on the next stream. When no more streams are found, FindNextStreamW will return FALSE and GetLastError() will return ERROR_HANDLE_EOF.

#include <Windows.h>
#include <iostream>
#include <vector>
#include <string>

using namespace std;

int main(int argc, char**argv)
{
    WIN32_FIND_STREAM_DATA fsd;
    HANDLE hFind = NULL;
    vector<wstring> arguments(argc);

    for (auto i = 0; i < argc; ++i)
    {
        auto arg = string(argv[i]);
        arguments[i] = (wstring(arg.begin(), arg.end()));
    }

    if (arguments.size() < 2)
        return 0;
    wstring fileName = arguments[1];

    try {
        hFind = FindFirstStreamW(fileName.c_str(), FindStreamInfoStandard, &fsd, 0);
        if (hFind == INVALID_HANDLE_VALUE) throw ::GetLastError();
        const int BUFFER_SIZE = 8192;
        WCHAR buffer[BUFFER_SIZE] = { 0 };
        WCHAR fileNameBuffer[BUFFER_SIZE] = { 0 };

        wcout << L"The following streams were found in the file " << fileName << endl;
        for (;;)
        {
            swprintf(fileNameBuffer, BUFFER_SIZE, L"%s%s", fileName.c_str(), fsd.cStreamName);
            swprintf_s(buffer, BUFFER_SIZE, L"%-50s %lld", fileNameBuffer, fsd.StreamSize.QuadPart);
            wstring formattedDescription = wstring(buffer);
            wcout << formattedDescription << endl;

            if (!::FindNextStreamW(hFind, &fsd))
            {
                DWORD dr = ::GetLastError();
                if (dr != ERROR_HANDLE_EOF) throw dr;
                break;
            }
        }
    }
    catch (DWORD err)
    {
        wcout << "Oops, Error happened. Windows error number " << err;
    }
    if (hFind != NULL && hFind != INVALID_HANDLE_VALUE)
        FindClose(hFind);
}

For my actual application purposes, I don’t need to query the streams in a file. The streams of interest to me will have a predetermined name. Instead of querying for them, I attempt to open the stream. If it isn’t there, I will get a return error code indicating that the file isn’t there. Otherwise I will have a file HANDLE for reading and writing. With what I’ve written so far, you could begin using this feature in C/C++ immediately. But my target is the .Net Framework. How do we use this information there?

In Win32, you can read or write these alternative data streams as you would any other file by using the correct stream name. If you try that within the .Net Framework, it won’t work: before even hitting the Win32 APIs, the .Net Framework will treat the stream name as an invalid file name. To work around this, you’ll need to P/Invoke the Win32 API for opening files. Thankfully, once you have a file handle, the .Net Framework will work with it just fine and allow you to use all the methods that you would with any other stream.

Before adding the P/Invoke that is needed to use this functionality in .Net, let’s define a few numerical constants.

    public partial class NativeConstants
    {
        public const uint GENERIC_WRITE = 0x40000000;
        public const uint GENERIC_READ = 0x80000000;
        public const int FILE_SHARE_DELETE = 4;
        public const int FILE_SHARE_WRITE = 2;
        public const int FILE_SHARE_READ = 1;
        public const int OPEN_ALWAYS = 4;
    }

These may look familiar; they have the same names as the constants used when calling the Win32 API from C and, as their names suggest, indicate the mode in which files should be opened. Now for the P/Invoke for the call that opens the files.

    public partial class NativeMethods
    {
        [DllImportAttribute("kernel32.dll", EntryPoint = "CreateFileW")]
        public static extern System.IntPtr CreateFileW(
            [InAttribute()][MarshalAsAttribute(UnmanagedType.LPWStr)] string lpFileName,
            uint dwDesiredAccess,
            uint dwShareMode,
            [InAttribute()] System.IntPtr lpSecurityAttributes,
            uint dwCreationDisposition,
            uint dwFlagsAndAttributes,
            [InAttribute()] System.IntPtr hTemplateFile
        );

    }

That’s it! That is the only P/Invoke that is needed.

The data that I was writing to these files was metadata on files for matching them up with entries in a CMS. This includes information like the last date a file was updated on the CMS, a CRC or ETag for knowing whether the version on the local computer is the same as the one on the CMS, and a title for presenting to the user (which may be different from the file name itself). I’ve decided that the name of the stream in which I am placing this data will simply be meta, and I’m using JSON for the data encoding; for your purposes, you could use any format that fits your application. Let’s open the stream.

I’ll use the Win32 CreateFileW function to get a file handle. That handle is passed to the .Net FileStream constructor; from there, there is no difference in how I read or write from this stream compared to any other.

var filePath = Path.Combine(ds.DownloadCachePath, $"{fe.ID}{extension}");
FileInfo fi = new FileInfo(filePath);
var fullPath = fi.FullName;
if (fi.Exists)
{
    var metaStream = NativeMethods.CreateFileW(
        $"{fullPath}:meta",
        NativeConstants.GENERIC_READ,
        NativeConstants.FILE_SHARE_READ,
        IntPtr.Zero,
        NativeConstants.OPEN_ALWAYS,
        0,
        IntPtr.Zero);
    using (StreamReader sr = new StreamReader(new FileStream(metaStream, FileAccess.Read)))
    {
        try
        {
            var metaData = sr.ReadToEnd();
            if (!String.IsNullOrEmpty(metaData))
            {
                var data = JsonConvert.DeserializeObject<FileEntry>(metaData);
                fe.LastModified = data.LastModified;
            }
        }
        catch (IOException)
        {
            // Malformed or unreadable metadata is treated as absent.
        }
    }
}
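
The writing side is symmetric. Here is a sketch under the same assumptions as the reading code above (the fe metadata object and Newtonsoft.Json):

var metaStream = NativeMethods.CreateFileW(
    $"{fullPath}:meta",
    NativeConstants.GENERIC_WRITE,
    NativeConstants.FILE_SHARE_READ,
    IntPtr.Zero,
    NativeConstants.OPEN_ALWAYS,
    0,
    IntPtr.Zero);
using (StreamWriter sw = new StreamWriter(new FileStream(metaStream, FileAccess.Write)))
{
    // Serialize the metadata into the file's "meta" stream as JSON.
    sw.Write(JsonConvert.SerializeObject(fe));
}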

I said earlier that this is a Windows-only solution and that it doesn’t work on the FAT32 file system. The two implications of this are that if you are using this in a .Net environment running on another operating system, it won’t work (it will likely fail since the P/Invokes won’t be able to bind), and that an active check is needed within the code: if a program using alternative file streams is given a FAT32 file system to work with, it should detect that it is on the wrong type of file system before trying to perform actions that will fail. Detecting the file system type only requires a few lines of code. In .Net, the following code will take the path of the currently running assembly, see what drive it is on, and retrieve the file system type.

static string GetDriveFormat()
{
    String assemblyPath = typeof(FileSystemDetector).Assembly.Location;
    String driveLetter = assemblyPath[0].ToString();
    DriveInfo driveInfo = new DriveInfo(driveLetter);
    string fsType = driveInfo.DriveFormat;
    return fsType;
}

If this code is run on a drive using the NTFS file system, the return value will be the string NTFS. If it is anything else, know that attempts to access alternative streams will fail. If you try to copy these files to a FAT32 drive, Windows will warn you of a loss of data; only the default streams will be copied to the FAT32 drive.

In the next posts on this I will demonstrate a practical use. I’ll also talk about what some might see as a security concern with alternate file streams.

Simple HTTP Server in .Net

.Net and .Net Core both already provide fully functional HTTP servers, as does IIS on Windows. But I found the need to make my own HTTP server in .Net for a plugin that I was making for a game (Kerbal Space Program, specifically). For my scenario, I was trying to limit the assemblies that I needed to add to the game to as few as possible, so I decided to build my own instead of adding references to the assemblies that had the functionality that I wanted.

For the class that will be my HTTP server, only two member variables are needed: one to hold a reference to the thread that will accept requests, and another for the TcpListener on which incoming requests will come.

Thread _serverThread = null;
TcpListener _listener;

I need to be able to start and stop the server at will. For now, when the server stops, all I want it to do is terminate the thread and release any network resources it held. In the Start function, I create the listener and start the thread for receiving requests. I could have the server listen only on the loopback adapter (localhost) by using the IP address 127.0.0.1 (IPv4) or ::1 (IPv6); this would generally be preferred unless there is a reason for external machines to access the service. I’ll need for this to be accessible from another device, so here I use the IP address 0.0.0.0 (IPv4) or :: (IPv6), which listens on all adapters.

public void Start(int port = 8888)
{
    if (_serverThread == null)
    {
        IPAddress ipAddress = new IPAddress(0); // 0.0.0.0; listen on all adapters
        _listener = new TcpListener(ipAddress, port);
        _serverThread = new Thread(ServerHandler);
        _serverThread.Start();
    }
}

public void Stop()
{
    if (_serverThread != null)
    {
        _serverThread.Abort();
        _serverThread = null;
        _listener.Stop();
    }
}

The TcpListener has been created, but it isn’t doing anything yet. The call to have it listen for a request is a blocking call, so the TcpListener will listen on a different thread. When a request comes in, we read the request that was sent and then send a response. I read the entire request and store it in a string, but I’m not doing anything with it just yet. For the sake of getting to something that functions quickly, I’m going to hardcode a response.

String ReadRequest(NetworkStream stream)
{
    MemoryStream contents = new MemoryStream();
    var buffer = new byte[2048];
    do
    {
        var size = stream.Read(buffer, 0, buffer.Length);
        if(size == 0)
        {
            return null;
        }
        contents.Write(buffer, 0, size);
    } while (stream.DataAvailable);
    var retVal = Encoding.UTF8.GetString(contents.ToArray());
    return retVal;
}

void ServerHandler(Object o)
{
    _listener.Start();
    while(true)
    {
        TcpClient client = _listener.AcceptTcpClient();
        NetworkStream stream = client.GetStream();

        try
        {
            var request = ReadRequest(stream);

            var responseBuilder = new StringBuilder();
            responseBuilder.AppendLine("HTTP/1.1 200 OK");
            responseBuilder.AppendLine("Content-Type: text/html");
            responseBuilder.AppendLine();
            responseBuilder.AppendLine("<html><head><title>Test</title></head><body>It worked!</body></html>");
            responseBuilder.AppendLine("");
            var responseString = responseBuilder.ToString();
            var responseBytes = Encoding.UTF8.GetBytes(responseString);

            stream.Write(responseBytes, 0, responseBytes.Length);

        }
        finally
        {
            stream.Close();
            client.Close();
        }
    }
}

To test the server, I made a .Net console program that instantiates the server.

namespace TestConsole
{
    class Program
    {
        static void Main(string[] args)
        {

            var x = new HTTPKServer();
            x.Start();
            Console.ReadLine();
            x.Stop();
        }
    }
}

I ran the program and opened a browser to http://localhost:8888. The web page showed the response “It worked!”. Now to make it a bit more flexible: the logic for what to do with a request will be handled elsewhere, as I don’t want it to be part of the logic for the server itself. I’m adding a delegate to my server. The delegate function will receive the request string and must return the response bytes that should be sent. I’ll also need to know the MIME type, so I’ve made a class for holding that information.

public class Response
{
    public byte[] Data { get; set; }
    public String MimeType { get; set; } = "text/plain";
}

public delegate Response ProcessRequestDelegate(String request);
public ProcessRequestDelegate ProcessRequest;

I’m leaving the hardcoded response in place, though I am changing the message to say that no request processor has been added. Generally, the code expects that the caller has registered a delegate to perform request processing; if it has not, this will serve as a message to the developer.

The updated method looks like the following.

void ServerHandler(Object o)
{
    _listener.Start();
    while(true)
    {
        TcpClient client = _listener.AcceptTcpClient();
        NetworkStream stream = client.GetStream();

        try
        {
            var request = ReadRequest(stream);

            if (ProcessRequest != null)
            {
                var response = ProcessRequest(request);
                var responseBuilder = new StringBuilder();
                responseBuilder.AppendLine("HTTP/1.1 200 OK");      
                responseBuilder.AppendLine("Content-Type: application/json");
                responseBuilder.AppendLine($"Content-Length: {response.Data.Length}");
                responseBuilder.AppendLine();

                var headerBytes = Encoding.UTF8.GetBytes(responseBuilder.ToString());

                stream.Write(headerBytes, 0, headerBytes.Length);
                stream.Write(response.Data, 0, response.Data.Length);
            }
            else
            {
                var responseBuilder = new StringBuilder();
                responseBuilder.AppendLine("HTTP/1.1 200 OK");
                responseBuilder.AppendLine("Content-Type: text/html");
                responseBuilder.AppendLine();
                responseBuilder.AppendLine("<html><head><title>Test</title></head><body>No Request Processor added</body></html>");
                responseBuilder.AppendLine("");
                var responseString = responseBuilder.ToString();
                var responseBytes = Encoding.UTF8.GetBytes(responseString);

                stream.Write(responseBytes, 0, responseBytes.Length);
            }
        }
        finally
        {
            stream.Close();
            client.Close();
        }
    }
}

The test program now registers a delegate. The delegate shows the request and sends a response derived from the current time. I’m marking the response as a JSON response.

static Response ProcessMessage(String request)
{
    Console.Out.WriteLine($"Request:{request}");
    var response = new HTTPKServer.Response();
    response.MimeType = "application/json";
    var responseText = "{\"now\":" + (DateTime.Now).Ticks + "}";
    var responseData = Encoding.UTF8.GetBytes(responseText);
    response.Data = responseData;
    return response;

}

static void Main(string[] args)
{
    var x = new HTTPKServer();
    x.ProcessRequest = ProcessMessage;
    x.Start();
    Console.ReadLine();
    x.Stop();
}

I grabbed my iPhone and made a request to the server. From typing the URL, there are actually two requests: one for the URL that I typed and one for an icon for the site.

Request:
Request:GET /HeyYall HTTP/1.1
Host: 192.168.50.79:8888
Upgrade-Insecure-Requests: 1
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
User-Agent: Mozilla/5.0 (iPhone; CPU iPhone OS 14_6 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) CriOS/91.0.4472.80 Mobile/15E148 Safari/604.1
Accept-Language: en-us
Accept-Encoding: gzip, deflate
Connection: keep-alive


Request:GET /favicon.ico HTTP/1.1
Host: 192.168.50.79:8888
Connection: keep-alive
User-Agent: Mozilla/5.0 (iPhone; CPU iPhone OS 14_6 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) CriOS/91.0.4472.80 Mobile/15E148 Safari/604.1
Accept-Encoding: gzip, deflate
Accept-Language: en,en-US;q=0.9,ja-JP;q=0.8

To make sure this works, I loaded it into my game and made a request. The request was successful, and I am ready to move on to implementing the logic that is needed for the game.

Sending Text Messages from .NET

SMS

I met with some others in Singapore for a project deployment, and one of the other people there wanted a snippet of information sent to him, to his phone. He asked what the most popular messaging application in the USA is. While Facebook Messenger is the most popular, the messaging landscape in the USA is fractured; one’s preferred messenger depends on their preferences and social circle (Telegram is my favourite). Because of this, SMS remains the most reliable way of sending short messages to someone. When I am traveling around the USA, or in some environments (the subway, inside certain buildings), I use SMS because data service becomes incredibly spotty.

I am making an ASP.NET application from which I want to be able to send notifications to a phone with no application installation necessary. I will be using SMS as my message transport of choice. How does an application do this? There are a few solutions. One is to subscribe to an SMS gateway service and use its API. I found several companies that provide these services, but many of them do not make pricing information openly available; speaking to a sales agent is necessary to get the price. A few that I encountered did share their prices, and I share what I found here. Note that prices are subject to change; the older this post is, the more likely the pricing information is stale.

SMS Gateway Services

Vonage

Vonage offers SMS services starting at 0.0068 USD per message sent and 0.0062 USD per message received. They also rent out virtual phone numbers at 0.98 USD per month. Verizon and US Cellular charge an additional fee for sending messages via long codes (0.0025 USD for Verizon and 0.0050 USD for US Cellular). If you would like to try it out, a trial-level subscription is available for free. Vonage provides SDKs for several languages, along with full code samples for each one of these languages.

  • Ruby
  • PHP
  • Python
  • .Net
  • NodeJS
  • Java
  • CLI

ClickSend

ClickSend offers a REST API for accessing their services. Like Vonage, they also offer SDKs for several environments and code samples on how to perform various actions. You will find code samples and/or SDKs for the following.

  • cURL
  • PHP
  • C#
  • Java
  • Node.JS
  • Ruby
  • Python
  • Perl
  • Go
  • Objective-C
  • Swift

Pricing is structured much differently; there are pricing tiers dependent on how many messages are sent. The tiers look like the following.

  • 0.0214 USD/message up to 2,000 messages
  • 0.0153 USD/message for more than 2,000 messages
  • 0.0104 USD/message for 10k messages or more
  • 0.0076 USD/message for 100k messages or more
  • Unspecified pricing for more than 200k messages

A dedicated number is purchasable for 3.06 USD/month. Short codes cost 886.82 USD/month.

D7 Networks

D7 Networks pricing is 0.005 USD/month for sending SMS for one-way communication. For two-way communication, the fee is 2.00 USD/month plus 0.012 USD/message for outbound messages; incoming messages are free. While there is no SDK, there is a REST API, which is easy enough to consume from most development environments.

E-mail

Provided someone can identify their service provider, e-mail is an option. An advantage of using e-mail to send an SMS is that it is free. A downside is that the recipient must know their service provider when registering with your service. While I do not think that is a general problem, there may be cases where someone does not know, such as when their phone is part of a family plan and someone else handles the service. Another possible issue is a person registering properly but later changing their service provider without updating it within your service.

Why does a user need to know their service provider for this to work? The USA carriers have domains that are used for sending SMS messages via e-mail. Appending a carrier’s domain to the end of someone’s phone number results in the e-mail address that is routed to their phone (a helper for building these addresses is sketched after the list below).

  • AT&T – @mms.att.net
  • Boost Mobile – @myboostmobile.com
  • Cricket Wireless – @mms.cricketwireless.net
  • Google Project Fi – @msg.fi.google.com
  • Republic Wireless – @text.republicwireless.com
  • Sprint – @messaging.sprintpcs.com
  • Straight Talk – @mypicmessages.com
  • T-Mobile – @tmomail.net
  • Ting – @message.ting.com
  • Tracfone – @mmst5.tracfone.com
  • US Cellular – @mms.usc.net
  • Verizon – @vzwpix.com
  • Virgin Mobile – @vmpix.com
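
As a quick sketch of how an application might build these addresses, the helper below (my own illustration, not an established API) maps a carrier name to its gateway domain and appends it to a ten-digit number.

using System;
using System.Collections.Generic;

static class SmsAddressBuilder
{
    // A few entries from the list above; extend with the remaining carriers.
    static readonly Dictionary<string, string> CarrierDomains =
        new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase)
        {
            ["AT&T"] = "mms.att.net",
            ["Sprint"] = "messaging.sprintpcs.com",
            ["T-Mobile"] = "tmomail.net",
            ["Verizon"] = "vzwpix.com"
        };

    public static string BuildAddress(string tenDigitNumber, string carrier)
    {
        if (!CarrierDomains.TryGetValue(carrier, out var domain))
            throw new ArgumentException($"Unknown carrier: {carrier}");
        return $"{tenDigitNumber}@{domain}";
    }
}

Calling SmsAddressBuilder.BuildAddress("5555550100", "Verizon") would return 5555550100@vzwpix.com, which can then be used as the recipient of an ordinary e-mail message.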

Sending messages in .Net is just a matter of using the SmtpClient class. The exact configuration you need will depend on your mail service. For my mail service, I have pre-configured the credentials in the computer’s credential store so that the information does not need to be stored in my application’s configuration.

// Requires: using System.Net; and using System.Net.Mail;
using (SmtpClient mailClient = new SmtpClient("myMailServer.com")
{
    // Use credentials from the machine's credential store rather than
    // storing them in the application's configuration.
    Credentials = CredentialCache.DefaultNetworkCredentials
})
{
    var message = new MailMessage(
        "FromAddress@myserver.com",
        "phoneMailAddress@carrierDomain.com", // number@carrier-gateway-domain
        "",                                   // blank subject
        "test body"
    );
    mailClient.Send(message);
}

Run .Net Core on the Raspberry Pi

The .NET Framework acts as an intermediate execution environment; with the right runtime, a .Net executable built on one platform can run on another. .NET Core focuses on APIs that are supported across a variety of platforms, which allows it to run in even more places. Windows, Linux, and macOS are all operating systems on which the .NET Core framework will run.

The .NET Core framework also runs on ARM systems and can be installed on the Raspberry Pi. I’ve successfully installed it on the Raspberry Pi 3 and 4. Unfortunately it isn’t supported on the Raspberry Pi Zero; ARMv7 is the minimum ARM version supported, and the Pi Zero uses an ARMv6 processor. Provided you have a supported system, you can install the framework in a moderate amount of time. The instructions here assume that the Pi is accessed over SSH. To begin, you must find the URL to the version of the framework that works on your device.

Visit https://dotnet.microsoft.com to find the downloads. The current version of the .NET Core framework is 3.1; the 3.1 downloads can be found here. For running and compiling applications, the installation to use is the .NET Core SDK (I’ll visit the ASP.NET Core framework in the future). For the ARM processors there are 32-bit and 64-bit downloads. If you are running Raspbian, use the 32-bit version even if the hardware is a 64-bit Raspberry Pi; Raspbian is a 32-bit operating system, and since there is no 64-bit version of it yet, the 32-bit .NET Core SDK is the one to install. Clicking on a download link takes you to a page where the file downloads automatically. Since I’m doing the installation over SSH, I cancel the download and grab the direct download link instead. Once you have the link, SSH into the Raspberry Pi.

You’ll need to download the framework. Using the wget command followed by the URL will result in the file being saved to storage.

wget https://download.visualstudio.microsoft.com/download/pr/ccbcbf70-9911-40b1-a8cf-e018a13e720e/03c0621c6510f9c6f4cca6951f2cc1a4/dotnet-sdk-3.1.201-linux-arm.tar.gz

After the download completes, the file must be unpacked. The location I’ve chosen is based on the default path where some .NET Core related tools look for the framework: /usr/share/dotnet. Create this folder and unpack the framework into it.

sudo mkdir /usr/share/dotnet
sudo tar zxf dotnet-sdk-3.1.201-linux-arm.tar.gz -C /usr/share/dotnet

As a quick check to make sure it works, go into the folder and try to run the executable named “dotnet”.

cd /usr/share/dotnet
./dotnet

The utility should print some usage information if the installation was successful. The folder should also be added to the PATH environment variable so that the utility is available regardless of the current folder. Another environment variable named DOTNET_ROOT should also be set to let other utilities know where the framework installation can be found. Open ~/.profile for editing.

sudo nano ~/.profile

Scroll to the bottom of the file and add the following two lines.

export PATH=$PATH:/usr/share/dotnet
export DOTNET_ROOT=/usr/share/dotnet

Save the file and reboot. SSH back into the Pi and try running the command dotnet. You should get a response even if you are not in the /usr/share/dotnet folder.

With the framework installed, new projects can be created and compiled on the Pi. To test the installation, try making and compiling a “Hello World!” program. Make a new folder in your home directory for the project.

mkdir ~/helloworld
cd ~/helloworld

Create a new console project with the dotnet command.

dotnet new console

A few new elements are added to the folder. If you view the file Program.cs, you’ll see that it defaults to a hello world program.
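
For reference, the .NET Core 3.1 console template generates a Program.cs that looks roughly like the following (the namespace is taken from the folder name).

using System;

namespace helloworld
{
    class Program
    {
        static void Main(string[] args)
        {
            // The template's single statement: print a greeting to the console.
            Console.WriteLine("Hello World!");
        }
    }
}

Compile the program with the dotnet command.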

dotnet build

After a few moments the compilation completes. The compiled output will be in ~/helloworld/bin/Debug/netcoreapp3.1. A file named helloworld is in this folder. Type ./helloworld to see it run.

My interest in the .NET Core framework is in using the Pi as a low-powered ASP.NET web server. In the next post about the .NET Core framework I’ll set up ASP.NET Core and host a simple website.

To see my other posts related to the Raspberry Pi, click here.

What is .Net


I have some .Net related content that I plan to post and thought that I would revisit this question.

It’s a question I find interesting in that the answer has changed slightly over the years. In the earliest years it was a branding for technologies that were not necessarily related to each other; Windows .NET Server and Windows .NET Messenger are two products that carried the branding at one point. But let’s not walk down memory lane, and instead jump straight to the answer.

.NET is still a branding, but the technologies carrying it are now related to each other: Microsoft uses the branding on its Common Language Runtime (CLR) products. That answer only kicks the can down the road, though. What is the CLR?

The CLR is a virtual machine component. Executables targeting the CLR don’t necessarily contain any code that is native to the processor on which they run (they may contain native code, but let’s ignore that for a moment). CLR binaries can be distributed with no processor-dependent executable code within them. At runtime, the code is converted to machine code as needed. Because of this, the same program can run on machines with different processor architectures. The computer on which a program runs only needs a runtime specific to its own architecture and operating system.
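
As a small illustration of this (my own example, using only standard framework APIs), the program below compiles to the same processor-independent binary everywhere; only at runtime does it report the machine it actually landed on.

using System;
using System.Runtime.InteropServices;

class PlatformDemo
{
    static void Main()
    {
        // The IL in this assembly is identical on every machine; the runtime
        // JIT-compiles it to the host's native instructions on demand.
        Console.WriteLine($"OS: {RuntimeInformation.OSDescription}");
        Console.WriteLine($"CPU: {RuntimeInformation.ProcessArchitecture}");
    }
}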

This system might sound familiar, as modern Java does something similar. There was a time when Microsoft was invested in Java virtual machines and made the first Java runtime that compiled Java binaries to machine language. The entity that owned Java at the time (Sun) wasn’t happy about this, and they took Microsoft to court for deviating from the standard of how Java virtual machines worked and for using the Internet as a method of distribution, among other reasons. This disagreement might sound petty, and in part it was, but there were good reasons for their position that I’ll present in another post. The episode added weight to the argument that Microsoft should have its own virtual machine. Microsoft went on to make its own programming languages (C# and Visual Basic .NET) and several CLRs for x86, x64, and their mobile devices.

The CLR and its supporting libraries, packaged as the .Net Framework, have seen several updates over the years. Microsoft published C# and the runtime as ECMA standards and eventually made its own implementation open source; the standards enabled another CLR implementation named Mono, which allowed .Net Framework applications to run on Linux and Mac.

If you look up .Net now you’ll find a few .NET systems listed.

  • .Net Standard
  • .Net Core (2016)
  • .Net Framework (2002)
  • ASP.Net / ASP.Net Core

What are these?

.Net Standard is a specification of the set of APIs that are expected to be present in every implementation of .Net. Think of it as analogous to an interface; .Net Standard itself isn’t an implementation. If you make an application that sticks to these APIs, it will have a wide range of compatible targets.

For the .Net Framework, only one version of the framework can be installed on a system at a time. Microsoft generally kept backwards compatibility, but it wasn’t perfect. Since a system could only have one version of the Framework installed, updating the Framework in corporations had to be a company-level decision.

.Net Core was made to contain the most common features of the .Net Framework while adding a few new ones. It was made with multiple operating systems in mind, and with multiple versions in mind: a system can have several versions of .Net Core installed, running side-by-side. From here on, Microsoft will be putting its efforts into improving .Net Core. Microsoft will continue to support the .Net Framework, but don’t expect to see new features in it; the new features will be coming to .Net Core. A lot of legacy functionality from the .Net Framework did not get ported over to .Net Core, in the interest of performance and compatibility.

ASP (Active Server Pages) is the name for Microsoft’s web development system. Some of the earlier versions used a language that was similar to Visual Basic (yuck). The first version of ASP that supported .NET was called ASP.NET. ASP.NET used the .Net runtime, and the more recent version, ASP.NET Core, supports the .NET Core runtime. Traditionally, ASP pages were hosted within IIS (Internet Information Services), a Windows component for hosting web pages. With the modern versions this is still an option, but ASP.NET pages can be hosted outside of IIS too.

If you are starting a new desktop .Net project and don’t know which version to use, the safe choice will generally be .Net Core. In my opinion its best feature is the ability to run on multiple systems (Mac, Linux, Windows, and various IoT devices, including the Raspberry Pi).


Trying to learn C# and .Net Core? This is a book I would recommend.