In-App Static Web Server with HttpListener in .Net

I was working on a Xamarin iOS application (using .Net) and one of the requirements was for the application to support a web view for presenting another form. The form would need to be served from within the application. There are lots of ways one could accomplish this; for these requirements, it only needed to be a static web server, with the contents delivered via a zip file. Creating a static web server is pretty easy. I’ve created one before, and making this one would be easier.

What made this one so easy is that .Net provides the HttpListener class, which handles most of the socket/network-related work for us. It also parses information from the incoming request, and we can use it to generate a well-formatted reply. It contains no logic for deciding which replies should be sent under which circumstances, for retrieving files from the file system, and so on. That’s the part I had to build.

I was given an initial suggestion of getting the Zip file, using the .Net classes to decompress it and write it to the iPad’s file system, and retrieve the files from there. I started with that direction, but ended up with a different solution. Since the amount of data in the static website would be small, I thought it would be fine to leave it in the compressed archive. But if I changed my mind on this I wanted to be able to make adjustments with minimal effort.

Receiving Connections

To receive connections, the HttpListener class needs to know the prefix strings for requests. A prefix will usually contain http://localhost with a port number, such as http://localhost:8081/. It must end with the slash. Multiple prefixes can be specified, and if you want the server to listen on all adapters for a specific port, localhost can be replaced with * here. After creating an HttpListener, these prefixes must be added to the listener’s Prefixes collection.

String[] PrefixList
{
    get
    {
        return new string[] { "http://localhost:8081/",  "http://127.0.0.1:8081/", "http://192.168.1.242:8081/" };
    }
}

void ListenRoutine()
{
    _keepListening = true;
    listener = new HttpListener();
            
    foreach (var prefix in PrefixList)
    {
        listener.Prefixes.Add(prefix);
    }
            
    listener.Start();
    //...more code follows
}

The listener is ready to start listening for requests now. A call to HttpListener.GetContext() will block until a request comes in. Since it blocks, everything that I’m doing with the listener is on a secondary thread. I use the listener in a loop to keep replying to requests. The HttpListenerContext object contains an object representing the request (HttpListenerRequest) and the response (HttpListenerResponse). From the request, I am interested in the AbsolutePath. This is the request URL path with any query parameters removed. I’m also interested in the verb that was used on the request. For the server that I made, I’m only handling GET requests.

while (_keepListening)
{
    //This call blocks until a request comes in
    HttpListenerContext context = listener.GetContext();
    HttpListenerRequest request = context.Request;
    HttpListenerResponse response = context.Response;


    ///Handle the request here

}
listener.Stop();

Let’s say that I wanted my server to return a hard coded response. I would need to know the size of that response in bytes. There is an OutputStream on the HttpListenerResponse object that I will write the entirety of my response to. Before I do, I set the ContentLength64 member of the HttpListenerResponse object.

async void HandleResponse(HttpListenerRequest request, HttpListenerResponse response)
{
    String responseString = "<html><body>Hello World</body></html>";
    byte[] responseBytes = System.Text.Encoding.UTF8.GetBytes(responseString);
    response.ContentLength64 = responseBytes.Length;
    var output = response.OutputStream;
    await output.WriteAsync(responseBytes, 0, responseBytes.Length);
    await output.FlushAsync();
    output.Close();
}

When I run the code now and navigate to the URL, I’ll see the text “Hello World” in the browser. But I want to be able to send more than just a hardcoded response. To make the server more useful, it needs to send the proper MIME type header for certain content, and I need to be able to easily change the content that it serves. To satisfy this goal, I’ve externalized the data from the program and defined an interface to aid in adding new ways for the server to respond to a request. I’ll also want to be able to define other classes with different behaviours for requests. For those classes I’ve made the interface IRequestHandler. It defines two methods and two properties that handlers must implement.

  • Prefix – this is a path prefix for the handler. A handler will only be considered for a request if the request’s absolute path starts with this prefix. If this field is an empty string, then the handler can be considered for any request.
  • DefaultDocument – if no file name is specified in the path, then this is the document name that will be used.
  • CanHandleRequest(string method, string path) – This gives the class basic information on the request. If the class can handle the request, it should return true from this method. If it returns false, it will not be given the request to process.
  • HandleRequest(HttpListenerRequest, HttpListenerResponse) – processes the actual request.
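The interface itself is small. This is a sketch of how it might be declared based on the members described above; the exact signatures in the original project may differ slightly.

```csharp
using System.Net;

public interface IRequestHandler
{
    // Path prefix that scopes this handler; an empty string matches any request.
    string Prefix { get; }

    // Document name used when the request path ends in a folder.
    string DefaultDocument { get; }

    // Returns true if this handler should be given the request.
    bool CanHandleRequest(string method, string path);

    // Processes the request and writes the response.
    void HandleRequest(HttpListenerRequest request, HttpListenerResponse response);
}
```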

A list of these handlers will be made. Each handler is considered for the request one at a time until one is found that is appropriate. When one is, it processes the request and no further handlers are considered. One of the handlers that I defined is the FileNotFoundHandler. It is the simplest of the request handlers; it can handle anything. Later, I’ll set this up as the last handler to be considered. If nothing else handles a request, then my FileNotFoundHandler will run.

public class FileNotFoundHandler : IRequestHandler
{
    public string Prefix => "/";

    public string DefaultDocument => "";

    public bool CanHandleRequest(string method, string path)
    {
        return true;
    }

    public async void HandleRequest(HttpListenerRequest request, HttpListenerResponse response)
    {
        String responseString = $"<html><body>Cannot find the file at the location [{request.Url.ToString()}]</body></html>";
        byte[] responseBytes = System.Text.Encoding.UTF8.GetBytes(responseString);
        response.StatusCode = 404;
        response.ContentLength64 = responseBytes.Length;
        var output = response.OutputStream;
        await output.WriteAsync(responseBytes, 0, responseBytes.Length);
        await output.FlushAsync();
        output.Close();
    }
}

Going back to the local server, I’m adding a list of IRequestHandler objects. The list will start with only the FileNotFoundHandler in it. Any other handler added will be inserted at the front of the list, pushing everything back by one position. The last item added to the list receives the highest priority.

List<IRequestHandler> _handlers = new List<IRequestHandler>();

public LocalServer(bool autoStart = false) {
    var fnf = new FileNotFoundHandler();
    AddHandler(fnf);
    if(autoStart)
    {
        Start();
    }
}

public void AddHandler(IRequestHandler handler)
{
    _handlers.Insert(0, handler);
}

void ListenRoutine()
{
    _keepListening = true;
    listener = new HttpListener();
            
    foreach (var prefix in PrefixList)
    {
        listener.Prefixes.Add(prefix);
    }
            
    listener.Start();
    while (_keepListening)
    {
        //This call blocks until a request comes in
        HttpListenerContext context = listener.GetContext();
        HttpListenerRequest request = context.Request;
        HttpListenerResponse response = context.Response;
        bool handled = false;
        foreach(var handler in _handlers)
        {
            if(handler.CanHandleRequest(request.HttpMethod, request.Url.AbsolutePath))
            {
                handler.HandleRequest(request, response);
                handled = true;
                break;
            }
        }
        if (!handled)
        {
            HandleResponse(request, response);
        }
    }
    listener.Stop();

}

This completes the functionality of the server itself, but I still need a handler. As I mentioned earlier, I wanted to serve content from a zip file, so I made a new handler named ZipRequestHandler. Some of the functionality that it needs will likely be part of almost any handler, so I’ll put that functionality in a base class named RequestHandlerBase. This base class defines a DefaultDocument of index.html. It is also able to provide MIME types based on a file extension. To retrieve MIME types, I have a string dictionary that maps an extension to a MIME type. Within the code I define some basic MIME types, but I don’t want all of them defined in source code; I have a JSON file that has a total of about 75 MIME types in it. If that file were omitted for some reason, the server would still have the foundational MIME types provided here.

static StringDictionary ExtensionToMimeType = new StringDictionary();

static RequestHandlerBase()
{
    ExtensionToMimeType.Clear();
    ExtensionToMimeType.Add("js", "application/javascript");
    ExtensionToMimeType.Add("html", "text/html");
    ExtensionToMimeType.Add("htm", "text/html");
    ExtensionToMimeType.Add("png", "image/png");
    ExtensionToMimeType.Add("svg", "image/svg+xml");
    LoadMimeTypes();
}

static void LoadMimeTypes()
{
    try
    {
        var resourceStreamNameList = typeof(RequestHandlerBase).Assembly.GetManifestResourceNames();
        var nameList = new List<String>(resourceStreamNameList);
        var targetResource = nameList.Find(x => x.EndsWith(".mimetypes.json"));
        if (targetResource != null)
        {
            DataContractJsonSerializer dcs = new DataContractJsonSerializer(typeof(LocalContentHttpServer.Handler.Data.MimeTypeInfo[]));
            using (var resourceStream = typeof(RequestHandlerBase).Assembly.GetManifestResourceStream(targetResource))
            {
                var mtList = dcs.ReadObject(resourceStream) as MimeTypeInfo[];
                foreach (var m in mtList)
                {
                    ExtensionToMimeType[m.Extension.ToLower()] = m.MimeTypeString.ToLower();
                }
            }
        }
    }
    catch
    {
        // If the embedded resource is missing or malformed, fall back to
        // the built-in MIME types registered in the static constructor.
    }
}
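The MimeTypeInfo class used by the serializer is not shown in the original listing. A minimal data-contract class compatible with that call might look like the following; the Extension and MimeTypeString property names come from their use above, but the JSON member names are assumptions.

```csharp
using System.Runtime.Serialization;

namespace LocalContentHttpServer.Handler.Data
{
    [DataContract]
    public class MimeTypeInfo
    {
        // File extension without the leading dot, e.g. "css".
        // The JSON member name is an assumption about the file's schema.
        [DataMember(Name = "extension")]
        public string Extension { get; set; }

        // The MIME type string, e.g. "text/css".
        [DataMember(Name = "mimeType")]
        public string MimeTypeString { get; set; }
    }
}
```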

Getting a MIME type is a simple dictionary lookup. We will see this used in the child class ZipRequestHandler.

public static string GetMimeTypeForExtension(string extension)
{
    extension = extension.ToLower();
    if (extension.Contains("."))
    {
        extension = extension.Substring(extension.LastIndexOf("."));
    }
    if (extension.StartsWith("."))
    {
        extension = extension.Substring(1);
    }
    if (ExtensionToMimeType.ContainsKey(extension))
    {
        return ExtensionToMimeType[extension];
    }
    return null;
}
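As a quick illustration, the method accepts either a bare extension or a full file name; these example calls assume only the MIME types registered in the static constructor above.

```csharp
var htmlType = RequestHandlerBase.GetMimeTypeForExtension("index.html"); // "text/html"
var jsType   = RequestHandlerBase.GetMimeTypeForExtension(".js");        // "application/javascript"
var unknown  = RequestHandlerBase.GetMimeTypeForExtension("dat");        // null when not registered
```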

The ZipRequestHandler accepts either a path to an archive or a ZipArchive object, along with a prefix for the requests. Optionally, the caseSensitive parameter can be set to false to disable the ZipRequestHandler‘s default behaviour of treating requests as case sensitive. I’ve defined a decompress parameter too, but haven’t implemented it yet. When I do, it will decide whether the ZipRequestHandler completely decompresses an archive before using it or keeps the data compressed in the zip file. The two constructors are not substantially different; let’s look at the one that accepts a string for the path to the zip file.

ZipArchive _zipArchive;
readonly bool _decompress;
readonly bool _caseSensitive = true;
Dictionary<string, ZipArchiveEntry> _entryLookup = new Dictionary<string, ZipArchiveEntry>();

public ZipRequestHandler(String prefix, string pathToZipArchive, bool caseSensitive = true, bool decompress = false):base(prefix)
{
    FileStream fs = new FileStream(pathToZipArchive, FileMode.Open, FileAccess.Read);
    _zipArchive = new ZipArchive(fs);            
    this._decompress = decompress;
    this._caseSensitive = caseSensitive;
    foreach (var entry in _zipArchive.Entries)
    {
        var entryName = (_caseSensitive) ? entry.FullName : entry.FullName.ToLower();
        _entryLookup[entryName] = entry;
    }
}

public override bool CanHandleRequest(string method, string path)
{
    if (method != "GET") return false;
    return Contains(path);
}

Given the ZipArchive, I collect the entries in the zip along with their paths. When a request comes in, I’ll use this to jump straight to the relevant entry. The effect of the caseSensitive parameter can be seen here: if the class is intended to run case insensitive, then I convert file names to lower case, and for later lookups the search name will also be converted to lower case. Provided that a request uses the GET verb and asks for a file contained within the archive, this class reports that it can handle the request.
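The Contains method called from CanHandleRequest is not shown in the listings. A plausible implementation, given the _entryLookup dictionary and _caseSensitive field above, might look like this; it normalizes the path the same way the entry names were stored.

```csharp
bool Contains(string path)
{
    // A folder path implies the default document.
    if (path.EndsWith("/"))
        path += DefaultDocument;
    // Entry names in the archive have no leading slash.
    if (path.StartsWith("/"))
        path = path.Substring(1);
    // Entry names were lower-cased when running case insensitive.
    if (!_caseSensitive)
        path = path.ToLower();
    return _entryLookup.ContainsKey(path);
}
```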

Of course, the handling of the request is where the real work happens. A request may have query parameters appended to the end of it. We don’t want those for locating a file; Url.AbsolutePath gives the request path with the query parameters removed. If the URL path is for a folder, then we append the name of the default document to the path. We also remove any leading slashes so that the name matches the path within the ZipArchive. While I use TryGetValue on the dictionary to retrieve the ZipArchiveEntry, this should always succeed since there was an earlier check for the presence of the file through the CanHandleRequest call. We then get the MIME type for the file using the method RequestHandlerBase::GetMimeTypeForExtension. If a MIME type was found, then the value for the Content-Type header is set.

The rest of the code looks similar to the code that returned the hard-coded responses. The ZipArchiveEntry abstracts away the details of getting a file out of a ZipArchive so nicely that it looks like reading from any other stream. The file is read and sent to the requester.

public override void HandleRequest(HttpListenerRequest request, HttpListenerResponse response)
{
    var path = request.Url.AbsolutePath;

    if (path.EndsWith("/"))
        path += DefaultDocument;
    if (path.StartsWith("/"))
        path = path.Substring(1);
    if (!_caseSensitive)
        path = path.ToLower();

    if (_entryLookup.TryGetValue(path, out var entry))
    {
        var mimeType = GetMimeTypeForExtension(path);
        if (mimeType != null)
        {
            response.AppendHeader("Content-Type", mimeType);
        }
        try
        {
            response.ContentLength64 = entry.Length;
            using (var entryFile = entry.Open())
            {
                // CopyTo handles partial reads; a single Read call is not
                // guaranteed to fill a buffer of entry.Length bytes.
                entryFile.CopyTo(response.OutputStream);
            }
            response.OutputStream.Flush();
            response.OutputStream.Close();
        }
        catch (Exception)
        {
            // The client may have disconnected mid-transfer; nothing to do.
        }
    }
    else
    {
        // Should not happen given the CanHandleRequest check, but be safe.
        response.StatusCode = 404;
        response.Close();
    }
}

The code in its present state meets most of the current needs. I won’t be sharing the final version of the code here; that will be in a private archive. But I can share a functional version. You can find the source code on GitHub at the following address.

https://github.com/j2inet/LocalStaticWeb.Net


Posts may contain products with affiliate links. When you make purchases using these links, we receive a small commission at no extra cost to you. Thank you for your support.

Mastodon: @j2inet@masto.ai
Instagram: @j2inet
Facebook: @j2inet
YouTube: @j2inet
Telegram: j2inet
Twitter: @j2inet

ASP.NET Core: Pushing Data over a Live Connection

Today I am creating an application in ASP.NET Core 3.1. The application needs to continually push out updated information to any clients that are actively connected. There are a few possible ways to do this. I have decided to use PushStreamContent for this.

With this class I can have an open HTTP connection over which data can be arbitrarily pushed.  The stream itself is raw.  I could push out text, binary data, or anything else that I wish to serialize.  This means that any parsing and interpretation of data is my responsibility, but that is totally fine for this purpose.  But how do I use PushStreamContent to accomplish this?

I will start by creating a new ASP.NET Core project.

For the ASP.NET Core Project type there are several paths to success.  The one that I am taking is not the only “correct” one.  I have chosen the template for “Web Application (Model-View-Controller)”.  A default web application is created and a basic HTML UI is put in place.

Now that the project is created, there are a few configuration items that we need to handle.  Startup.cs requires some updates since we are going to have WebAPI classes in this project.

Within Startup.cs there is a method named ConfigureServices.

public void ConfigureServices(IServiceCollection services)
{
    services.AddControllersWithViews();
}

In this method an additional line is needed to support WebApiConventions.

public void ConfigureServices(IServiceCollection services)
{
    services.AddControllersWithViews();
    services.AddMvc().AddWebApiConventions();
}

For the sake of development the CORS headers for the application also need some updating.  Further down in Startup.cs is a method named Configure.

app.UseCors(builder =>  builder.AllowAnyHeader().AllowAnyOrigin().AllowAnyMethod());

If you are using Visual Studio as an editor, you might see that AddWebApiConventions() has a red underline and an error about the method being undefined. To resolve this there is a NuGet package to add to the project. Right-click on Dependencies and select “NuGet Package Manager.” Click on the “Browse” tab and search for a package named Microsoft.AspNetCore.Mvc.WebApiCompatShim.

After that is added the basic configuration is complete.  Now to add code.

Right-click on the Controllers folder and select “Add New”, “Controller.”  For the controller type select “API Controller – Empty.”  For the controller name I have selected “StreamUpdateController.”  There are three members that will need to be on the controller.

  • Get(HttpRequestMessage) – the way that a client will subscribe to the updates.  They come through an HTTP Get.
  • OnStreamAvailable(Stream, HttpContent, TransportContext) – this is a static method that is called when the response stream is ready to receive data.  Here it only needs to add a client to the collection of subscribers.
  • Clients – a collection of the currently attached clients subscribing to the message.

For the collection of clients an auto-implemented property is sufficient.  A single line of code takes care of its implementation.

private static ConcurrentBag<StreamWriter> Clients { get; } = new ConcurrentBag<StreamWriter>();

The Get() method adds the new connection to the collection of connections and sets the response code and mime type.  The mime type used here is text/event-stream.

public HttpResponseMessage Get(HttpRequestMessage request)
{
    const String RESPONSE_TYPE = "text/event-stream";
    var response = new HttpResponseMessage(HttpStatusCode.Accepted)
    {
        Content = new PushStreamContent((a, b, c) =>
        { OnStreamAvailable(a, b, c); }, RESPONSE_TYPE)
    };
    response.Content.Headers.ContentType = new System.Net.Http.Headers.MediaTypeHeaderValue(RESPONSE_TYPE);
    return response;
}

The static method, OnStreamAvailable, only has a couple of lines in which it adds the stream to the collection with a StreamWriter wrapped around it.

private static void OnStreamAvailable(Stream stream, HttpContent content,
    TransportContext context)
{
    var client = new StreamWriter(stream);
    Clients.Add(client);
}

As of yet there is not anything being written to the streams.  To keep this sample simple all clients will receive the same data to their streams.  Everything is in place to start writing data out to clients.  I will use a static timer deciding when to write something to the stream.  Every time its period has elapsed all clients connected to the stream will receive a message.

private static System.Timers.Timer writeTimer = new System.Timers.Timer() { Interval = 1000, AutoReset = true };

static StreamUpdateController()
{
    writeTimer.Elapsed += TimerElapsed;
    writeTimer.Start();
}

The handler for the timer’s elapsed event will get the current date and print it to all client streams.

static async void TimerElapsed(object sender, ElapsedEventArgs args)
{
    var dataToWrite = DateTime.Now.ToString();
    foreach(var clientStream in Clients)
    {
        await clientStream.WriteLineAsync(dataToWrite);
    }
}

You can now test the stream out.  If you run the project and navigate to https://localhost:yourPort/api/StreamUpdate you can see the data being written…

…well, kind of. The browser window is blank. Something clearly has gone wrong, because there is no data here. Maybe there was some type of error and we are not getting an error message. Or perhaps the problem was caused by… wait a second, there is information on the screen. What took so long?

The “problem” here is that writes are buffered. This normally is not a problem and can actually contribute to better performance. But we need the buffer to be transmitted once a complete message is within it. To resolve this, the stream can be asked to clear its buffer; Stream.Flush() and Stream.FlushAsync() will do this. The updated TimerElapsed method now looks like the following.

static async void TimerElapsed(object sender, ElapsedEventArgs args)
{
    var dataToWrite = DateTime.Now.ToString();
    foreach(var clientStream in Clients)
    {
        await clientStream.WriteLineAsync(dataToWrite);
        await clientStream.FlushAsync();

    }
}

Run the program again and open the URL to now see the messages rendered to the screen every second.  But how does a client receive this?
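For a .Net consumer, a minimal sketch with HttpClient could look like the following. The port number and endpoint path are placeholders matching the example URL above, and reading line by line is an assumption about how the consumer wants to frame the data.

```csharp
using System;
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;

class StreamClient
{
    static async Task Main()
    {
        using (var http = new HttpClient())
        // ResponseHeadersRead keeps HttpClient from trying to buffer the
        // (endless) body before returning control to us.
        using (var response = await http.GetAsync(
            "https://localhost:5001/api/StreamUpdate",
            HttpCompletionOption.ResponseHeadersRead))
        using (var body = await response.Content.ReadAsStreamAsync())
        using (var reader = new StreamReader(body))
        {
            string line;
            while ((line = await reader.ReadLineAsync()) != null)
            {
                Console.WriteLine($"Received: {line}");
            }
        }
    }
}
```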

While it is easy for .Net programs to interact with this, I wanted to consume the data from HTML/JavaScript. Provided that you use a certain data format, there is an object named EventSource that makes interacting with the output of this stream easy. I have not used that format, so the option available to me is XMLHttpRequest. If you have used XMLHttpRequest before, then you know that after a request completes an event is raised making the completed data available to you. That does not help for getting chunks of data as they become available. The object has another event of help: onprogress.

When onprogress is fired the responseText member has the accumulated data.  The length field indicates how long the accumulated data is.  Every time the progress event is raised the new characters added to the string can be examined to grab the next chunk of data.

function xmlListen() {
    var xhr = new XMLHttpRequest();
    var last_index = 0;
    xhr.open("GET", "/api/Stream/", true);
    //xhr.setRequestHeader("Content-Type", "text/event-stream");
    xhr.onload = function (e) {
        console.log(`readystate ${xhr.readyState}`);
        if (xhr.readyState === 4) {
            if (xhr.status >= 200 && xhr.status <= 299) {
                console.log(xhr.responseText);
            } else {
                console.error(xhr.statusText);
            }
        }
    };
    xhr.onprogress = (p) => {
        var curr_index = xhr.responseText.length;
        if (last_index == curr_index) return;
        var s = xhr.responseText.substring(last_index, curr_index);
        last_index = curr_index;
        console.log("PROGRESS:", s);
    };
    xhr.onerror = function (e) {
        console.error(xhr.statusText);
    };
    xhr.send(null);
}

This works. But I do have some questions about this implementation. The XMLHttpRequest‘s responseText appears to grow without end. For smaller data sizes this might not be a big deal, but since I work on some HTML applications that may run for over 24 hours, it could lead to unnecessary memory pressure. That is not desirable. Let us go back to EventSource…

The data being written is not in the right format for EventSource. To make the adjustment, the data must be preceded by the string “data: ” and followed by two newline characters. That is it. A downside of conforming to something compatible with EventSource is that all data must be expressible as text.

static async void TimerElapsed(object sender, ElapsedEventArgs args)
{
    var dataToWrite = $"data: {DateTime.Now.ToString()}\n\n";
    foreach(var clientStream in Clients)
    {
        await clientStream.WriteLineAsync(dataToWrite);
        await clientStream.FlushAsync();
    }
}

On the client side the following will create an EventSource that receives those messages.

function startEventSource() {
    var source = new EventSource('/api/Push/');
    source.onmessage = (message) => {
        console.log(message.id, message.data);
    }
    source.onerror = (e) => {
        console.error(e);
    }
    source.onopen = () => {
        console.log('opened');
    }
}

I would still like to be able to stream less constrained data through the same service. To do this, instead of holding a collection of StreamWriter objects, I have made a class that holds the stream along with an attribute indicating the format the data should be in. The Clients collection becomes a List<Client> so that disconnected clients can be removed. A client can specify a format through a query parameter.

enum StreamFormat
{
    Text,
    Binary
}
class Client
{
    public Client(Stream s, StreamFormat f)
    {
        this.Stream = s;
        this.Writer = new StreamWriter(s);
        this.Format = f;
    }

    public Stream Stream { get;  }
    public StreamWriter Writer { get; }
    public StreamFormat Format { get; }

}

public HttpResponseMessage Get(HttpRequestMessage request)
{
    var format = request.RequestUri.ParseQueryString()["format"] ?? "text";
    const String RESPONSE_TYPE = "text/event-stream";
    HttpResponseMessage response;
    if (format.Equals("binary"))
    {
        response = new HttpResponseMessage(HttpStatusCode.Accepted)
        {
            Content = new PushStreamContent((a, b, c) =>
            { OnBinaryStreamAvailable(a, b, c); }, RESPONSE_TYPE)
        };
    }
    else
    {
        response = new HttpResponseMessage(HttpStatusCode.Accepted)
        {
            Content = new PushStreamContent((a, b, c) =>
            { OnStreamAvailable(a, b, c); }, RESPONSE_TYPE)
        };
    }           
    return response;
}

static void OnStreamAvailable(Stream stream, HttpContent content, TransportContext context)
{                        
    Clients.Add(new Client(stream, StreamFormat.Text));
}

static void OnBinaryStreamAvailable(Stream stream, HttpContent content, TransportContext context)
{
    Clients.Add(new Client(stream, StreamFormat.Binary));
}

static async void TimerElapsed(object sender, ElapsedEventArgs args)
{
    var data = new byte[] { 0x01, 0x02, 0x03, 0x04, 0x10, 0x40 };

    List<Client> unsubscribeList = new List<Client>();
    foreach (var client in Clients)
    {
        try
        {
            if (client.Format == StreamFormat.Binary)
            {
                await client.Stream.WriteAsync(data, 0, data.Length);
                await client.Stream.FlushAsync();
            }
            else
            {
                await client.Writer.WriteLineAsync($"data: {ByteArrayToString(data)}\n\n");
                await client.Writer.FlushAsync();
            }
        }
        catch (Exception)
        {
            // A failed write means the client disconnected; drop it.
            unsubscribeList.Add(client);
        }
    }
    Clients.RemoveAll((i) => unsubscribeList.Contains(i));
}