Using an iPhone as a Beacon

As with any project, I had a list of milestones and dates on which I expected to hit them leading up to project completion. One of the elements of the project was an application that needed to detect its proximity to other devices to select a device for interaction. I planned to use iBeacons for this, and had a delivery date on some beacons for development. The delivery date came, a box with the matching tracking number came, but there were no iBeacons inside. Instead, there was a phone case. This isn’t the first time I have ordered one item and Amazon has sent me another. I went online and filled out a form to have the order corrected. They stated I would have the item in another 5 days. In the meantime, I didn’t want to let progress slip. I’ve heard several times “You can use an iPhone as an iBeacon.” I now had motivation to look into this. You can in fact use a phone as an iBeacon, but you have to write an application yourself to use it this way.

When I took a quick look in the App Store, I couldn’t find an app for this purpose. So I decided to make an application myself. It isn’t hard. In my case, I’m emulating an iBeacon as a stand-in for actual hardware, but there are other reasons you might want to do this. For example, if I were using an iPad as a display showing more information on an exhibit, users browsing the exhibit could interact with the content on the display using their own phones. The iBeacon signal could be used so that a user’s phone knows which display it is close to, allowing them to trigger interactions from their own phone (a valuable method of interaction given heightened concerns over hygiene and shared touch surfaces).

Beacons are uniquely identified by three pieces of data: the UUID, Major number, and Minor number. A UUID, or Universally Unique ID, is usually shared among a group of iBeacons that are associated with the same entity. The usage of the Major and Minor numbers is up to the entity. Usually the Major will be used to group related iBeacons together, with the Minor number being used as a unique ID within the set. I’ll talk more about these numbers in another post.

For my iPhone application, I have created a few variables to hold the simulated Beacon’s identifiers. I also have a variable to track whether the iBeacon is active, and have defined a Zero UUID to represent a UUID that has not been assigned a value.

class BeaconManager {
    
    var objectWillChange = PassthroughSubject<Void, Never>()
    
    let ZeroUUID = UUID(uuidString: "00000000-0000-0000-0000-000000000000")
    
    var BeaconUUID = UUID(uuidString: "00000000-0000-0000-0000-000000000000") {
        didSet { updateUI() }
    }
    
    var Major:UInt16 = 100 {
        didSet { updateUI() }
    }
    
    var Minor:UInt16 = 2 {
        didSet { updateUI() }
    }
    
    var IsActive:Bool = false {
        didSet { updateUI() }
    }
}

I am going to use SwiftUI for displaying information. That is why setting these variables also triggers a call to updateUI(). There are some callbacks made by Apple’s iBeacon API; for these, I’ll also need to implement the CBPeripheralManagerDelegate protocol, which is defined in CoreBluetooth. We also need permission for the device to advertise its presence over Bluetooth. Bluetooth is often used for indoor location (which is my ultimate intention). Let’s get all these other things in place. The necessary import statements and inheritance will look like the following.

import Foundation
import CoreLocation
import CoreBluetooth
import Combine

class BeaconManager: NSObject, CBPeripheralManagerDelegate, Identifiable, ObservableObject {
   ...
}
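
The updateUI() method isn’t shown in the listing above; here is a minimal sketch, assuming the Combine PassthroughSubject declared earlier is what SwiftUI observes:

func updateUI() {
    // Publish a change so any SwiftUI view observing this object re-renders.
    // Dispatch to the main queue because peripheral manager callbacks may
    // arrive on another thread.
    DispatchQueue.main.async {
        self.objectWillChange.send()
    }
}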

For the Bluetooth permission that the application needs, a new String value must be added to the Info.plist. The item’s key is NSBluetoothAlwaysUsageDescription. The value should be a text description that will be presented to the user letting them know why the application is requesting Bluetooth permissions.
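
Edited as XML source, the Info.plist entry might look like the following (the description string here is just an example):

<key>NSBluetoothAlwaysUsageDescription</key>
<string>This application advertises as an iBeacon so that nearby devices can detect it.</string>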

I want the simulated iBeacon to have the same value every time the application runs. At runtime, the application is going to check whether there is a UUID already saved in the settings. If there is not one, then it will generate a new UUID and save it to the settings. From then on, it will always use the same ID. I do the same thing with the Major and Minor numbers using the UInt16.random(in:) function. This information together is used to create a CLBeaconRegion.

    func createBeaconRegion() -> CLBeaconRegion {
        // BEACON_UUID_KEY, BEACON_MAJOR_KEY, and BEACON_MINOR_KEY are String
        // constants defined elsewhere in the class.
        let settings = UserDefaults.standard
        if let savedUUID = settings.string(forKey: BEACON_UUID_KEY),
           let tempBeaconUUID = UUID(uuidString: savedUUID) {
            BeaconUUID = tempBeaconUUID
        }
        if BeaconUUID == nil || BeaconUUID == ZeroUUID {
            BeaconUUID = UUID()
            settings.set(BeaconUUID!.uuidString, forKey: BEACON_UUID_KEY)
        }
        // integer(forKey:) returns 0 when no value has been saved yet
        let majorValue = settings.integer(forKey: BEACON_MAJOR_KEY)
        if majorValue == 0 {
            Major = UInt16.random(in: 1...65535)
            settings.set(Int(Major), forKey: BEACON_MAJOR_KEY)
        } else {
            Major = UInt16(majorValue)
        }
        let minorValue = settings.integer(forKey: BEACON_MINOR_KEY)
        if minorValue == 0 {
            Minor = UInt16.random(in: 1...65535)
            settings.set(Int(Minor), forKey: BEACON_MINOR_KEY)
        } else {
            Minor = UInt16(minorValue)
        }
        print(BeaconUUID!.uuidString)
        let major: CLBeaconMajorValue = Major
        let minor: CLBeaconMinorValue = Minor
        let beaconID = "net.domain.application"
        return CLBeaconRegion(proximityUUID: BeaconUUID!, major: major, minor: minor, identifier: beaconID)
    }

When I first tried to use the CLBeaconRegion it failed, and I was confused. After a bit more reading, I found out why. The Bluetooth radio can take a moment to initialize into the mode that the code needs it for, and trying to use it too soon can result in failure. To fix this, wait for a callback to the delegate method peripheralManagerDidUpdateState(_ peripheral: CBPeripheralManager). In the handler for this callback, check whether the state of the peripheral is .poweredOn. If it is, then we can start using our CLBeaconRegion. We can call startAdvertising on the CBPeripheralManager object to make the iBeacon visible. When we want the phone to no longer act as an iBeacon, we can call stopAdvertising. Note that the device will only continue to transmit while the application has focus. If the application gets pushed to the background, the phone will stop presenting as an iBeacon.

    func peripheralManagerDidUpdateState(_ peripheral: CBPeripheralManager) {
        if peripheral.state == .poweredOn {
            let beaconRegion = createBeaconRegion()
            let peripheralData = beaconRegion.peripheralData(withMeasuredPower: nil)
            peripheral.startAdvertising(peripheralData as? [String: Any])
            IsActive = true
        }
    }

    // The manager is created in start(); constructing it triggers the
    // state callback above once the radio is ready.
    var peripheral: CBPeripheralManager?

    func start() {
        if !IsActive {
            peripheral = CBPeripheralManager(delegate: self, queue: nil)
        }
    }
    
    func stop() {
        if(IsActive) {
            if (peripheral != nil){
                peripheral!.stopAdvertising()
            }
            IsActive = false
        }
    }

The code for the class I used for simulating the iBeacon follows. For the simplest use case, just instantiate the class and call the start() method. Provided the Info.plist has been populated with a value for NSBluetoothAlwaysUsageDescription and the user has granted permission, it should just work. In the next post, let’s look at how to detect iBeacons with an iOS application. The next application isn’t limited to only detecting iPhones acting as iBeacons; it will work with real iBeacons too. As of now I have gotten my hands on a physical iBeacon-compatible transmitter. While any iBeacon transmitter should work, if you would like to follow along with the same iBeacon that I am using, you can purchase the following from Amazon (affiliate link).
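
A minimal usage sketch of the class above (error handling and UI omitted):

let beaconManager = BeaconManager()
beaconManager.start()   // begins advertising once the radio reports .poweredOn
// ...later, when the phone should stop presenting as an iBeacon:
beaconManager.stop()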


High Resolution Video Capture on the Raspberry Pi

I’ve previously talked about video capture on the Raspberry Pi using an HDMI device that interfaced with the camera connector. Today I’m looking at using USB capture devices. USB capture devices often present as web cams. The usual software and techniques that you may use with a webcam should generally work here.

The two devices that I have are the Elgato Cam Link 4K and the Atomos Connect. Despite the name, the Cam Link 4K does not presently work in 4K mode for me on the Pi (I’m not sure the Pi would be able to handle that even if it did). I am using a Raspberry Pi 4. I got better results with the Atomos Connect; it is able to send the Pi pre-compressed video, so the Pi doesn’t have to compress it.

Hardware setup is simple. I connected the USB capture device to the Pi and connected an HDMI source to the capture device. If you want to be able to monitor the video while it is captured, you will also need an HDMI splitter; the Pi does not show the video while it is being captured. Most of what needs to be done happens in the Raspberry Pi terminal.

If you want to ensure that your capture device was detected, you can use the lsusb command. This command lists all the hardware detected on the USB ports. If you can’t recognize a device in the list, running the command, disconnecting the device, running it again, and noting the difference will let you match a line of the output to an item of hardware. Trying first with the Elgato Cam Link, my device was easily identified: there was an item labeled Elgato Systems GmbH.
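
The command takes no arguments; running it once with the device attached and once without makes the new line easy to spot.

lsusb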

I’ve not been able to make the devices work with raspistill or raspivid, but they work with ffmpeg and Video4Linux (v4l-utils). To install the Video4Linux utilities, use the following command.

sudo apt install v4l-utils

Once Video 4 Linux is installed, you can list the devices that it detects and the device file names.

v4l2-ctl --list-devices

In addition to hardware encoders, I get the following in my output.

Cam Link 4K: Cam Link 4K (usb-0000:01:00.0-2):
          /dev/video0
          /dev/video1

The device name of interest is the first, /dev/video0. Using that file name, we can check what resolutions that it supports with ffmpeg.

$ ffmpeg -f v4l2 -list_formats all -i /dev/video0

[video4linux2,v4l2 @ 0xcca1c0] Raw     :    yuyv422:      YUYV 4:2:2 : 1920x1080
[video4linux2,v4l2 @ 0xcca1c0] Raw     :       nv12:      YUYV 4:2:2 : 1920x1080
[video4linux2,v4l2 @ 0xcca1c0] Raw     :   yuyv420p:      YUYV 4:2:0 : 1920x1080

For this device, the only resolution supported is 1920×1080. I have three options for the pixel format: yuyv422, nv12, and yuyv420p. Before we start recording, let’s view the input. With the following command, ffmpeg (via ffplay) reads from the video capture device and displays the HDMI video stream on the screen.

ffplay -f v4l2 -input_format nv12 -video_size 1920x1080 -framerate 30 -i /dev/video0

In your case, the format (nv12), resolution, and hardware file name may need to be different. If all goes well after running that command, you will see the video stream. Let’s have the video dump to a file now.

ffmpeg -f v4l2 -thread_queue_size 1024 -input_format nv12 -video_size 1920x1080 -framerate 30 -i /dev/video0 -f pulse -thread_queue_size 1024 -i default -codec copy output.avi

This command will send the recorded data to an AVI file. AVI is a container format in which audio, video, and other data can be packaged together. You will probably want to convert this to a more portable format. We can also use ffmpeg to convert the output file from AVI to MP4. I’m going to use H.264 for video encoding and AAC for audio encoding.

ffmpeg -i output.avi  -vcodec libx264 -acodec aac -b:v 2000k -pix_fmt yuv420p output.mp4

You can find an audio entry on this blog post on Spotify!

Those of you who follow me on Instagram may have seen a picture of the equipment that I use to record the video walkthrough of how to do this. A list of the items used is below. Note that these are Amazon Affiliate links.

BlackMagic Designs ATEM:
  • Mini
  • Mini Pro
  • Mini Pro ISO

Visual Studio 2022 Release Date

Visual Studio 2022 has been available in test form for a while. I’ve got the release candidate running now. But there is now a release date available for it. If you are looking to upgrade to VS 2022, your wait will be over on November 8, 2021. On this day Microsoft is holding a launch event for Visual Studio 2022. This isn’t just a software release, but also will have demonstrations on what Visual Studio 2022 is bringing.

Visual Studio 2022 brings greater support/integration with GitHub (Microsoft agreed to purchase GitHub back in 2018), along with code editor and debugging improvements. The range of new features touches WPF, WinForms, WinUI, ASP.NET, and areas not traditionally thought of as Windows specific, such as cross-platform game technologies, developing applications for Mac, and apps for Linux.

The fun starts on November 8, 8:30AM PDT. Learn more here.

#VS2022




Simple HTTP Server in .Net

.Net and .Net Core both already provide fully functional HTTP servers, as does IIS on Windows. But I found the need to make my own HTTP server in .Net for a plugin that I was making for a game (Kerbal Space Program, specifically). For my scenario, I was trying to limit the assemblies that I needed to add to the game to as few as possible, so I decided to build my own server instead of adding references to the assemblies that had the functionality that I wanted.

For the class that will be my HTTP server, only two member variables are needed: one to hold a reference to the thread that will accept requests, and another for the TcpListener on which incoming requests will arrive.

Thread _serverThread = null;
TcpListener _listener;

I need to be able to start and stop the server at will. For now, when the server stops, all I want it to do is terminate the thread and release any network resources it held. In the Start function, I want to create the listener and start the thread for receiving requests. I could have the server listen only on the loopback adapter (localhost) by using the IP address 127.0.0.1 (IPv4) or ::1 (IPv6). That would generally be preferred unless there is a reason for external machines to access the service. Since I need this to be accessible from another device, I will bind to all adapters using the address 0.0.0.0 (IPv4) or :: (IPv6).

public void Start(int port = 8888)
{
    if (_serverThread == null)
    {
        // IPAddress.Any (0.0.0.0) binds to all network adapters.
        _listener = new TcpListener(IPAddress.Any, port);
        _serverThread = new Thread(ServerHandler);
        _serverThread.Start();
    }
}

public void Stop()
{
    if(_serverThread != null)
    {
        _serverThread.Abort();
        _serverThread = null;
    } 
}

The TcpListener has been created, but it isn’t doing anything yet. The call that has it listen for a request blocks, so the TcpListener will do its listening on a separate thread. When a request comes in, we read the request that was sent and then send a response. I’ll read the entire request and store it in a string, but I’m not doing anything with it just yet. For the sake of getting to something that functions quickly, I’m going to hardcode a response.

String ReadRequest(NetworkStream stream)
{
    MemoryStream contents = new MemoryStream();
    var buffer = new byte[2048];
    do
    {
        var size = stream.Read(buffer, 0, buffer.Length);
        if(size == 0)
        {
            return null;
        }
        contents.Write(buffer, 0, size);
    } while (stream.DataAvailable);
    var retVal = Encoding.UTF8.GetString(contents.ToArray());
    return retVal;
}

void ServerHandler(Object o)
{
    _listener.Start();
    while(true)
    {
        TcpClient client = _listener.AcceptTcpClient();
        NetworkStream stream = client.GetStream();

        try
        {
            var request = ReadRequest(stream);

            var responseBuilder = new StringBuilder();
            responseBuilder.AppendLine("HTTP/1.1 200 OK");
            responseBuilder.AppendLine("Content-Type: text/html");
            responseBuilder.AppendLine();
            responseBuilder.AppendLine("<html><head><title>Test</title></head><body>It worked!</body></html>");
            responseBuilder.AppendLine("");
            var responseString = responseBuilder.ToString();
            var responseBytes = Encoding.UTF8.GetBytes(responseString);

            stream.Write(responseBytes, 0, responseBytes.Length);

        }
        finally
        {
            stream.Close();
            client.Close();
        }
    }
}

To test the server, I made a .Net console program that instantiates the server.

namespace TestConsole
{
    class Program
    {
        static void Main(string[] args)
        {

            var x = new HTTPKServer();
            x.Start();
            Console.ReadLine();
            x.Stop();
        }
    }
}

I ran the program and opened a browser to http://localhost:8888. The web page shows the response “It worked!”. Now to make it a bit more flexible. The logic for what to do with a request will be handled elsewhere; I don’t want it to be part of the logic for the server itself. I’m adding a delegate to my server. The delegate function will receive the request string and must return the response bytes that should be sent. I’ll also need to know the MIME type, so I’ve made a class for holding that information.

public class Response
{
    public byte[] Data { get; set; }
    public String MimeType { get; set; } = "text/plain";
}

public delegate Response ProcessRequestDelegate(String request);
public ProcessRequestDelegate ProcessRequest;

I’m leaving the hardcoded response in place, though I am changing the message to say that no request processor has been added. Generally, the code expects that the caller has registered a delegate to perform request processing. If it has not, this will serve as a message to the developer.

The updated method looks like the following.

void ServerHandler(Object o)
{
    _listener.Start();
    while(true)
    {
        TcpClient client = _listener.AcceptTcpClient();
        NetworkStream stream = client.GetStream();

        try
        {
            var request = ReadRequest(stream);

            if (ProcessRequest != null)
            {
                var response = ProcessRequest(request);
                var responseBuilder = new StringBuilder();
                responseBuilder.AppendLine("HTTP/1.1 200 OK");      
                responseBuilder.AppendLine("Content-Type: application/json");
                responseBuilder.AppendLine($"Content-Length: {response.Data.Length}");
                responseBuilder.AppendLine();

                var headerBytes = Encoding.UTF8.GetBytes(responseBuilder.ToString());

                stream.Write(headerBytes, 0, headerBytes.Length);
                stream.Write(response.Data, 0, response.Data.Length);
            }
            else
            {
                var responseBuilder = new StringBuilder();
                responseBuilder.AppendLine("HTTP/1.1 200 OK");
                responseBuilder.AppendLine("Content-Type: text/html");
                responseBuilder.AppendLine();
                responseBuilder.AppendLine("<html><head><title>Test</title></head><body>No Request Processor added</body></html>");
                responseBuilder.AppendLine("");
                var responseString = responseBuilder.ToString();
                var responseBytes = Encoding.UTF8.GetBytes(responseString);

                stream.Write(responseBytes, 0, responseBytes.Length);
            }
        }
        finally
        {
            stream.Close();
            client.Close();
        }
    }
}

The test program now registers a delegate. The delegate will show the request and send a response derived from the current time. I’m marking the response as a JSON response.

static Response ProcessMessage(String request)
{
    Console.Out.WriteLine($"Request:{request}");
    var response = new HTTPKServer.Response();
    response.MimeType = "application/json";
    var responseText = "{\"now\":" + (DateTime.Now).Ticks + "}";
    var responseData = Encoding.UTF8.GetBytes(responseText);
    response.Data = responseData;
    return response;

}

static void Main(string[] args)
{
    var x = new HTTPKServer();
    x.ProcessRequest = ProcessMessage;
    x.Start();
    Console.ReadLine();
    x.Stop();
}

I grabbed my iPhone and made a request to the server. From typing the URL in, there are actually two requests: one for the URL that I typed, and one for an icon for the site.

Request:
Request:GET /HeyYall HTTP/1.1
Host: 192.168.50.79:8888
Upgrade-Insecure-Requests: 1
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
User-Agent: Mozilla/5.0 (iPhone; CPU iPhone OS 14_6 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) CriOS/91.0.4472.80 Mobile/15E148 Safari/604.1
Accept-Language: en-us
Accept-Encoding: gzip, deflate
Connection: keep-alive


Request:GET /favicon.ico HTTP/1.1
Host: 192.168.50.79:8888
Connection: keep-alive
User-Agent: Mozilla/5.0 (iPhone; CPU iPhone OS 14_6 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) CriOS/91.0.4472.80 Mobile/15E148 Safari/604.1
Accept-Encoding: gzip, deflate
Accept-Language: en,en-US;q=0.9,ja-JP;q=0.8

To make sure this works, I loaded it into my game and made a request. The request was successful, and I am ready to move on to implementing the logic that is needed for the game.

Why Your Computer Might not Run Windows 11

As of October 5, 2021, the RTM version of Windows 11 is available for download. I’ve downloaded it and have tried to install it on a range of machines. In doing this, the first thing that stands out is that there are a lot of otherwise capable machines in existence today that may not be able to run the new operating system. There are three requirements that tended to be the obstacles on the computers on which I did not install Windows 11.

  • Secure Boot Enabled
  • TPM 2.0
  • Unsupported Processor

Because of the Secure Boot and TPM requirements, I found that I could not install Windows 11 on my Macs using Boot Camp. Other guides that I’ve found all have Windows 11 on a Mac being installed within a virtual machine. The unsupported-processor issue was not expected. One of the computers on which the installation failed has a 3.0 GHz, 16-core Xeon processor and an RTX 3090 video card, with 64 GB of RAM, 2 terabytes of M.2 storage, and a few more terabytes on conventional drives. But its processor is not supported. Even if your computer matches the Windows 11 requirements on paper, that doesn’t guarantee it is actually compatible. If you want to test for yourself, the best method is to run the Windows 11 upgrade advisor.

Microsoft is using virtualization-based security (VBS) to protect processes in Windows 11. A sub-feature of this, Hypervisor-Protected Code Integrity (HVCI), prevents code injection attacks. Microsoft has said that computers with processors that support this feature have a 99.8% crash-free experience (source). For purposes of reliability and security, Microsoft has decided that this feature will be part of the baseline for Windows 11. Back in August, Microsoft made a blog post stating that they found some older processors that met their requirements and would be adding them to their compatibility list. There’s a chance that a computer that shows as not compatible today could show as compatible later.

A Windows feature that I’ve enjoyed using is Windows To Go (W2G). With W2G, a Windows environment is installed on a USB drive specifically made for this feature (it would not quite work on a regular drive). The W2G drive could be plugged into any PC or Intel-based Mac as a boot drive. It was a great way to carry an environment around and to use as an emergency drive. Microsoft discontinued support for the feature some time ago, but it still worked. With Windows 11, this feature is effectively dead.

You can find both the upgrade advisor and the Windows 11 download at the following link.

https://www.microsoft.com/en-us/software-download/windows11

Windows 11 offers a lot of consumer-focused features and a new look. But my interest is in the new APIs that the OS provides. Microsoft has extended the DirectX APIs, especially in Composition and DirectDisplay. The Bluetooth APIs have extended support for low-energy devices. And there is now support for haptic pen devices and more phone control.

I was already in the market for a new laptop, so I’ll be getting another computer that runs Windows 11 soon enough. In the meantime, I’ve moved one of my successful installs to my primary work area so that I can try it out as my daily driver. More to come…



Conferences and a Hearing, Sept-Oct 2021

During the month of October, there are a couple of developer conferences happening. Samsung is resuming what had been their regular Developer Conference (there wasn’t one in 2020, for obvious reasons). Like so many other conferences, this one is going to be online, on 26 October. Details of what will be in it haven’t been shared yet, but I noticed a few things from the iconography of their promotional video.

The Tizen logo is present, specifically on a representation of a TV. It looks as though Samsung has abandoned the Tizen OS for everything else. They generally don’t announce that they are sunsetting a technology, instead opting to quietly let it disappear. A few months ago, Google made the ambiguous announcement that Samsung and Google were combining their wearable operating systems into a single platform, while not directly saying that Tizen was going away. Just before the release of the Galaxy Watch4 (which runs Wear OS, not Tizen), Samsung announced that they were still supporting Tizen. But with no new products on the horizon and the reduction in support in the store, this looks more like a phased product sunset.

Some of the other products suggested by the imagery include wearables, Smart Things (home automation), Bixby (voice assistant) and Samsung Health.

October 12-14, Google is hosting their Cloud Next conference. Registration for this conference is open now, and available at no cost. Google has made the session catalog available. The session categories include AI/Machine Learning, Application Development, Security, and more.

Sessions available at https://cloud.withgoogle.com/next

And last, if you have an interest in the USA’s developing responses to technology issues, this Thursday the Senate Committee on Commerce, Science, and Transportation is holding a hearing with Facebook’s head of safety over recent reports published by the Wall Street Journal about the impact of its apps on younger audiences. The hearing (with live stream) will be Thursday, September 30, 2021 at 10:30 AM EDT. The livestream will be available at www.commerce.senate.gov.



Sharing Resources With Your Chromebook Linux Container

If you have installed Linux on your Chromebook, you may have noticed that the file system as viewed from the files application on your Chromebook and the file system in the Linux terminal do not look alike. This is because Linux is running within a container. There are two ways to share files between your Chromebook and the Linux container.

If you open the Files application on your Chromebook, you will see an area called Linux Files. This lets you access the files in your Linux home directory from the Chromebook. Linux, however, doesn’t have immediate access to the files on the Chromebook; to access those, you need to explicitly share a folder with Linux. From the Files application, find the folder that you want to share, right-click on it, and select “Share with Linux.” From within Linux, if you navigate to the path /mnt/chromeos, you will see sub-folders that are mount points for each of the folders you’ve shared.

You can also share USB drives with Linux. By default, they are not available. If you open Settings and look for “Manage USB Devices” the USB drives that are connected to your machine will be listed. You can select a drive to share with Linux from there. Note that when you disconnect the drive, the next time that it is reconnected it will not automatically be shared.

The Linux container’s ports are also not exposed to your network by default. For the ports to be visible to other devices on your network, you must explicitly share them. Under settings if you look for “Port Forwarding” you will be taken to an interface where you can specify the ports that will be exposed. Note that you can only add ports in the range of 1024 to 65,535.

Hosting a Website from Home on the Pi

In many cases, one can easily host a website from home. There are a few technical requirements that must be satisfied, but provided they can be, making a site available on the Internet from one’s home connection isn’t hard. That said, there are some reasons one might not want to do this, such as exposing one’s home IP address, or a security flaw in the at-home web server making other devices on the home network vulnerable. There are a few ways this risk can be mitigated, including ensuring that the latest security updates are installed and not exposing unnecessary services to the Internet.

Before getting started, you will want to know whether your IP address is public or behind a network address translator. This is like the difference in having your own phone number or being in a phone pool in which all of the phones are identified by the same phone number. The easiest way to figure this out is to perform a Google Search on “What is my IP address” and compare that address to the one that is reported by your router. If those addresses are the same, congrats! You’ve got a public address. If they are not the same, then your network is behind a shared IP address. Solutions for hosting in this scenario involve having traffic routed from another computer that has a public address to one’s home address. That’s out of scope of what I am trying to do here.
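
As an alternative to the search, you can ask a third-party service from a terminal; this example assumes the ifconfig.me service:

curl https://ifconfig.me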

If you have a public IP address, you will need access to your router settings. By default, routers will not direct incoming requests to devices on your network unless they are configured to do so. The exact interface used for applying these settings are manufacturer dependent. I’ll be showing the process on an ASUS router. If you are using a different router (or even another ASUS router) the interface will look different, but the concept is the same.

For a web server, I’m going to use a Raspberry Pi. The Pi doesn’t use a lot of energy or space, so I can leave it on 24/7 in a place that is out of the way. Many of the same solutions for making web applications on the desktop also run on the Raspberry Pi. I’ll be using .Net Core, but someone could also use Node or a number of other solutions. I already have the .Net Core framework and SDK installed on my Raspberry Pi. I’ll just create a default website, as I’m not concerned with its content for now. To do this, I made a new folder to contain the web site and used the following command.

dotnet new web

A few moments later, the Raspberry Pi has a “Hello World!” website built. To run it, use the following command.

dotnet run

The console will output the URL to the site. It will look something like http://localhost:5000. If you open a browser on the Pi to this URL you should see the text “Hello World” render. If you use the Pi’s IP address and try this from another computer on the network, it is going to fail. But why?

By default, the website is only binding to the loopback IP address (127.0.0.1 or ::1). The site is only visible from within the computer. To change this, we could either have the site bind to a specific IP address (the Pi could have several IP addresses) or we could tell the site to bind to all network adapters on the computer. For .Net Core, we can change the address and port that the site binds to by editing Properties/launchSettings.json. Close to the bottom of the file is a setting named applicationUrl. It has a list of IP address/port combinations separated by semicolons (;). Add to this list the value http://*:5000 (feel free to use a different port number).

    "staticwebsite": {
      "commandName": "Project",
      "dotnetRunMessages": "true",
      "launchBrowser": true,
      "applicationUrl": "https://localhost:5001;http://localhost:5000;http://*:5000",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
     }

Now if you return to the other computer and navigate, in your browser, to http://<ip address>:5000, you will see the text “Hello World”. The site is now visible to other computers on your network. But what about other computers on the Internet? This is where you will need to look at your router settings.

Your home router’s IP address will usually be the same as what the computer reports as its gateway address. Depending on your operating system, you can see your computer’s network settings by opening a terminal and using the command ipconfig (Windows) or ip r (Linux). Example output of both commands follows.

PS C:\Users\SomeUser> ipconfig

Windows IP Configuration


Ethernet adapter Ethernet:

   Media State . . . . . . . . . . . : Media disconnected
   Connection-specific DNS Suffix  . :

Ethernet adapter Ethernet 2:

   Connection-specific DNS Suffix  . :
   Link-local IPv6 Address . . . . . : fe80::e055:c0d4:65ac:49d1%3
   IPv4 Address. . . . . . . . . . . : 192.168.50.79
   Subnet Mask . . . . . . . . . . . : 255.255.255.0
   Default Gateway . . . . . . . . . : 192.168.50.1

pi@raspberrypi:~ $ ip r
default via 192.168.50.1 dev wlan0 proto dhcp src 192.168.50.50 metric 304
192.168.50.0/24 dev wlan0 proto dhcp scope link src 192.168.50.50 metric 304

Type this IP address into your browser. You will need to know the user ID and password for your router, as it will likely ask you for them. Within your router, you will want to find the port forwarding settings. On my Asus router, I can get to this page by selecting “WAN” and then selecting the “Virtual Server/Port Forwarding” tab. My router warns me that port 80 (usually used for HTTP) and ports 20 and 21 (used for FTP) are already used by the router for some services that it provides. I don’t actually have these services turned on, and thus wouldn’t run into conflicts. Nevertheless, I will use a different port.

Asus Router Port Forwarding Page

Let’s say that I want the world to access my site through port 8081. You may recall that my Pi was hosting its site on port 5000; it is not necessary for the ports to be the same. In my interface there is a switch to enable port forwarding. Clicking on “Add Profile” will create a new entry. For this entry, I need to specify that this forwarding is for TCP traffic coming in on external port 8081, and that the traffic should be forwarded to port 5000 on the address of my Raspberry Pi, which is 192.168.50.50.

Port Forwarding Settings

After saving these settings, I turned off WiFi on my phone (to ensure it was not on the same network) and typed my home network’s IP address into my browser. I saw the “Hello World” page. Great! But there is an issue here: I’m going to find it hard to remember my IP address, and my IP address changes every so often. Even if I keep track of it, it could change without notice. The solution to this is to use a Dynamic DNS (DDNS). The router I’m using comes with this service built in; there are also options for networks where the router does not have this feature.

On this Asus router, the option is under the WAN settings in a tab named DDNS. To enable the feature, I set the option “Enable the DDNS Client” to Yes and select which DDNS service I want to use. If I use the default, the only setting I need to enter is the name that will be used as a portion of the domain.

Supported DDNS services for the Asus Router

For the other services, you will need additional information. I’ve used No-IP before, so I’ll use that one. Before using this service, you will want to visit no-ip.com and create a free account. After signing in, you can set up your DDNS host names; you can have up to three domains on the free account. When setting this up, note that the IPv4 address defaults to that of the device from which the hostname is created.

DDNS Setup Screen

The actual domain used will be the concatenation of what is entered in the Hostname setting, followed by a period, followed by the domain selected from the dropdown. Once you’ve saved this information, return to your router’s DDNS settings. Enter the complete host name, your No-IP username, and password. After selecting “Apply,” you should see a notification that the settings were successfully applied. Now, when you, or anyone else, enters that domain name, your site will come up.

If your router doesn’t support a DDNS client, you can run a DDNS updater on your Raspberry Pi. On your Pi, make a folder for the Dynamic Update Client (DUC). Enter that folder from a terminal, then download, unpack, and build the DUC with the following commands.

wget https://www.noip.com/client/linux/noip-duc-linux.tar.gz
tar vzxf noip-duc-linux.tar.gz
cd noip-2.1.9-1/
sudo make
sudo make install

The last command, sudo make install, will ask for your No-IP account information. You will also be asked for an update interval (in minutes); accept the default of 30 minutes. From here on, the Raspberry Pi will keep the DDNS entry updated.
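
You can start the client and confirm it is running with the noip2 binary that make install placed on the system:

sudo noip2       # start the dynamic update client
sudo noip2 -S    # display info about running noip2 processes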



Installing .Net Core and Visual Studio Code on Chromebook

Having a portable, lightweight development environment is important to me, and I’m always on the lookout for a solution. There have been a number of solutions I’ve found promising that, over a longer period of time, just haven’t worked out. Samsung had Linux on DeX, which allowed certain phones to run a full Ubuntu Linux environment, but they later discontinued it and removed support for it. Microsoft had Windows To Go, which allowed a full Windows environment to be installed on a portable USB drive and moved from one computer to another; I still have a few of these drives and use them, but Windows has dropped support for them and replacement drives are hard to find. I’ve managed to use a Raspberry Pi as a portable development environment too. But now I have what looks to be an improved option: many Chromebooks support a Linux environment.

I’m using a Samsung Chromebook Plus. While the unit that I have is older, it has seen some significant changes throughout its life. When I first received it, editing code meant using a code editor from the Chrome store; this was nowhere near the best editor that I have used, but it worked. The Chromebook later gained the ability to run Android applications, and in the Google Play Store there were some code editors, but once again, not the best. My Chromebook has had a Linux environment on it for a while, and I’ve recently installed Visual Studio Code. Finally! A **real** code editor!

Before installing .Net Core or Visual Studio Code, the Chromebook must be enabled for Linux. Not all Chromebooks support this feature. To enable this feature, open the Chromebook settings. Search for “Linux.” If your Chromebook supports Linux you will see the option “Turn on.” Press the button and wait a few moments. The Chromebook will install the components needed for Linux. After the installation is complete, you may need to restart your Chromebook. Once the Chromebook has restarted you will have the program “Terminal” on your computer. If you open this, you will be within the Linux environment. Update the environment, and install a text editor to use at the terminal.

sudo apt-get update
sudo apt-get upgrade
sudo apt-get install nano

From within the terminal, you can install .Net Core the same way that you would on a Raspberry Pi. Visit https://dotnet.microsoft.com and select “Downloads.” You will see options for downloading the .Net Framework or .Net Core; select the “all .Net Core Downloads” link. The next screen has a list of the .Net Core downloads. At the time of this writing, the most recent version is .Net Core 5.0. Select it to see the builds. My Chromebook uses an ARM processor, though there are some that use x86 processors; for my Chromebook, I must download the ARM 32-bit Linux version. The next page will share a direct download link. Copy this link and open the terminal.

In the terminal, use the wget command to download the .Net Core installation. For the current version the command looks like the following.

wget https://download.visualstudio.microsoft.com/download/pr/97820d77-2dba-42f5-acb5-74c810112805/84c9a471b5f53d6aaa545fbeb449ad2a/dotnet-sdk-5.0.301-linux-arm.tar.gz

After a few moments, the installation package has downloaded. To install it, we must make a folder into which it will be installed, unpack the tar file into that folder, and add the folder to the path.

sudo mkdir /usr/share/dotnet
sudo tar xvf name_of_archive.tar.gz -C /usr/share/dotnet
sudo nano ~/.profile

That last command will open your .profile file, which contains a list of commands that are run when you login. Go to the end of the file and add these two lines.

export PATH=$PATH:/usr/share/dotnet
export DOTNET_ROOT=/usr/share/dotnet

.Net Core is installed, but we need to restart the Linux environment before it will work. Right-click on the terminal icon in the task bar and select the option to “Shut down Linux.” When you open it again, the changes will be applied.
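
After the restart, a quick check from the terminal confirms the installation:

dotnet --version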

Installing Visual Studio Code is easier. Navigate to http://VisualStudio.com. You will see links to the different editions of Visual Studio; select the one for Visual Studio Code. Select the option to see more, and on the next page select the download for the Linux deb ARM build of Visual Studio Code. After the file downloads, find it in your file system and double-click on it. An installer window will open; select the option to install. For me, the installation progress bar did not move for quite some time before it showed any change. After the installation is complete, you can start it by typing ‘code’ in the command terminal or by finding it in your list of installed programs.

Unencrypted HTTP on Android

Most network resources that I access with the Android applications that I build communicate over HTTPS. It isn’t often, but I sometimes access a resource over an unencrypted connection. This is usually the case for home automation and media control devices. One example of such an application is a Roku remote that I made for myself; the Roku accepts HTTP requests to simulate presses on the remote control. When I create something that needs to access resources over unencrypted HTTP, there’s a step I usually forget that leaves me wondering why I am receiving null responses back from my request.

When Android P was released, Google implemented a change to encourage developers to use encrypted HTTPS instead of unencrypted HTTP.

As part of a larger effort to move all network traffic away from cleartext (unencrypted HTTP) to TLS, we’re also changing the defaults for Network Security Configuration to block all cleartext traffic. You’ll now need to make connections over TLS, unless you explicitly opt-in to cleartext for specific domains.

Android Developer Blog, Dave Burke, VP of Engineering for Android

The most recent time that I forgot to enable unencrypted communication was perplexing. I was communicating with a device over UDP and HTTP, and I saw that the device was responding to the UDP requests; I was a bit confused before I remembered the step that I had missed.

To enable unencrypted clear text communication generally within an application, there is an additional attribute that must be added to the <application> element in the application’s manifest.

android:usesCleartextTraffic="true"
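
In context, the attribute goes on the <application> element of AndroidManifest.xml; a trimmed example:

<application
    android:label="@string/app_name"
    android:usesCleartextTraffic="true">
    <!-- activities, services, etc. -->
</application>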

That is going to enable HTTP traffic for all domains. I used this option for communication over a local network, since there isn’t a specific domain I can target for someone’s home connection. When there is a specific domain that your application must communicate with, you can instead create a network security policy permitting unencrypted communication with only that domain. Before doing this, consider what information your application is sending. If there is any personally identifying or sensitive information in your messages, then this option is not acceptable. If for some reason you cannot enable HTTPS (such as the domain being controlled by another entity) and you’ve reviewed the risks and consider them acceptable, then you can move forward with allowing unencrypted communication with that domain. To do that, create a new XML resource file in res/xml, giving it a name of your choosing. The contents of the file will look like the following; place the domain(s) of interest in the configuration.

<?xml version="1.0" encoding="utf-8"?>
<network-security-config>
    <domain-config cleartextTrafficPermitted="true">
        <domain includeSubdomains="true">yourtargetdomain.net</domain>
    </domain-config>
</network-security-config>

In the application’s manifest, add a reference to this policy. Assuming the above file is named “network_security_config.xml” the manifest entry would look like the following.

<application android:networkSecurityConfig="@xml/network_security_config" ... />

While using SSL/HTTPS is generally preferable and lower risk, there may be times when you must fall back on unencrypted cleartext communication. When this happens, for Android P and later your application must explicitly opt in to cleartext communication.

A Cellular Connection for the Raspberry Pi

I recently set up the SIMCOM 7600 on a Jetson Nano. When I wrote about it, I mentioned that there is also a version of the cellular modem made specifically for the Raspberry Pi. After a few delivery delays, I have one in my hands and am looking at it now. Since the Pi and the Jetson are both Linux-based ARM devices and the modules use the same chipset, my expectation is for the setup to be similar. There are two primary versions of the device. The version that I am using was released in December 2020. There are some slight differences in the labels on the headers between these models, and the version I have is also without an SD card slot; some of the older versions have one.

When you purchase a SIMCOM 7600, there is usually a letter following that number. The letter lets you know which variant you have; different variants are best supported by mobile networks in different regions. E-H works best in Southeast and West Asia, Europe, and Africa. A-H is primarily for North America, Australia, New Zealand, Taiwan, and Latin America.

The SIMCOM 7600 connects to a Pi the same way you would connect any other HAT. Well, mostly; there are some options in how it is connected. The most obvious way is to connect the 4G HAT to the 40-pin connector, which is the method I used. To do so, I had to remove the cooling fan that I had on my Pi. Included in the box are a couple of standoffs and screws for securing the 7600 to the board. Personally, I feel that a Pi that is using a mobile connection should also have its own battery; to secure the board and my battery, I had to use a different set of standoffs. But I’ve got everything working (minus the cooling fan).

The SIMCOM 7600 for the Pi has a couple of USB ports and jumpers on it. Before powering it on, I went to the documentation to see what these were all for. Starting with the yellow header, there is a jumper already bridging PWR and 3V3. This sets a power-on option: in this default state, the SIMCOM 7600 will turn on any time that it receives power. If the jumper is moved to bridge PWR and D6, then the SIMCOM 7600 will be off by default, but the Pi can control the power state itself. A user can also control the state through the power button on the side of the device. A third option is to remove the jumper entirely; in that case, the only way to control the device’s power state is manually, using the power button.

In addition to controlling power, there is also the option to place the device in flight mode. To control flight mode with the Pi, bridge pins D4 and Flight with a jumper. If the jumper is present, then flight mode is controllable through software.

Just behind the headphone jack is another set of jumpers. The purpose of this header was not immediately obvious to me; it is not mentioned in the manual, but it shows up on the schematic for the SIMCOM 7600. This header decides how communication with the SIMCOM 7600 will occur.

SIMCOM 7600 Communication Jumpers

The pins that lead to the SIMCOM chip itself are TXD 3.3V and RXD 3.3V. These lines pass through a level converter to raise the signals to the voltage that the SIMCOM uses. If the jumpers are in their top position (connecting U_RX to TXD 3.3V and U_TX to RXD 3.3V), communication with the SIMCOM occurs over USB (specifically the port labeled USB J1). In the middle position, communication with the SIMCOM occurs over the Raspberry Pi 40-pin header on pins 8 and 10 (P_TX and P_RX). In the lowest position, the USB port connects to the Pi, with no connection made to the SIMCOM chip.

There is a second USB port on the board. What is that for? The second USB port connects directly to the SIMCOM itself, which has USB interface pins on the chip. That means there are two ways to communicate with the SIMCOM 7600 chip.

There are only a few lines on the 40-pin header that interact with the SIMCOM 7600. I could restore the heatsink and fan to my Pi and still let it communicate with the SIMCOM 7600 over USB along with those few other lines. But I prefer to have the board secured to the Pi.

Leaving the settings in their default state, I’ll be communicating with the SIMCOM 7600 over both USB and the 40-pin header. To minimize the number of things that I could forget to do that would leave the board non-responsive, I’m going to leave it bolted to the Pi to keep it more secure.

Before setup, ensure that you’ve updated the packages on your Raspberry Pi.

sudo apt-get update
sudo apt-get upgrade

Ensure that the serial port on the Pi is enabled. From the Pi desktop, open the Pi menu, select “Preferences,” then select “Raspberry Pi Configuration.” In the Interfaces tab, select “Enable” next to the “Serial Port” item. If it was not enabled before, you will need to reboot after you enable it.

Shutdown your Pi and remove power from it. Attach the 4G hat to the Pi and power it back up. You should see the Power light on the Pi illuminated solid red. If the Pi detects a cellular signal, the Net light will blink. If it is solid, ensure that you have securely attached the antenna and have the SIM card in place.

Open a command terminal and type

sudo lsusb

You will see the devices detected on the USB bus listed. Connect the Pi and the cellular modem using the USB port that is next to the cellular antenna. Then, from the command terminal, run the lsusb command again. You should see an additional item of hardware. If you do, then the Pi has detected the modem.
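
The modem typically enumerates as several USB serial devices. You can list them with the following; the exact count and numbering may differ by model.

ls /dev/ttyUSB*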

Let’s get the software installed. The drivers for the modem are in a *.7z file, so you will need to install a tool for unarchiving it. You will also need a tool for interacting with the serial port.

sudo apt-get install minicom p7zip-full

Download and unpackage the example code for the SIMCOM 7600. Alongside this sample code is the driver that is needed for the Raspberry Pi.

wget https://www.waveshare.com/w/upload/2/29/SIM7600X-4G-HAT-Demo.7z
7z x SIM7600X-4G-HAT-Demo.7z -r -o/home/pi
sudo chmod 777 -R /home/pi/SIM7600X-4G-HAT-Demo

When the Pi boots up, we want it to initialize the SIMCOM board. To ensure this happens, open /etc/rc.local and add the following line (before the final exit 0).

sh /home/pi/SIM7600X-4G-HAT-Demo/Raspberry/c/sim7600_4G_hat_init

After initialization, you can start interacting with the Pi HAT. As a test that it is responding, you can connect to it using the minicom utility, send some AT commands, and see that it responds. You can connect to it using either port /dev/ttyUSB2 or /dev/ttyUSB3.

minicom -D /dev/ttyUSB2 -b 115200
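
Once connected, a few standard AT commands will confirm the modem is responding:

AT          (the modem should reply OK)
AT+CPIN?    (reports SIM status; +CPIN: READY means the SIM is usable)
AT+CSQ      (reports signal quality)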

Updating Android Content without Redeploying

On short notice, I received an assignment to put together a quick, functional prototype for an application. The prototype only needed to demonstrate that some bit of functionality was possible. I wanted to be able to update some of the assets used by the application without doing a redeploy. Part of the reason for this is that the application was going to be demonstrated by someone in another city, and I wouldn’t be able to do any last-minute updates myself through a redeploy. I managed to put together a system that allowed me to make content updates on a website that the demonstration device could download when the application was run. I’m sharing that solution here.

A few things to keep in mind though. Since this was for a prototype that had to be put together rapidly, there are some implementation details that I would probably not do in a real application; such as performing the downloads using a thread instead of a coroutine.

To make this work, the application by design loads assets from the file system. The assets that it uses are packaged with the application. On first run, the app will pull those assets from its package and write them to the file system. The application that I am demonstrating here loads a list of images and captions for those images and displays them on the screen. For the asset collection that is baked into the application, I only have one image and one caption.

To demonstrate, I’ve created a new sample application (as I can’t share the prototype that I made) that lists the images that it has on the screen. For the initial content set, this is a list of a single image. If you would like to see the complete code, you can clone it from https://github.com/j2inet/AndroidContentDownloadSample.git. When the application is run, it downloads an alternative content set. The images I used were taken in the High Museum of Art in Atlanta.

The application at first run (packaged content only) and after the web content download has completed

There are a few folder locations that I’ll use for managing files. A complete content set will be present at the root of the application’s files directory. There will be a subfolder that holds partially downloaded files while they are being sourced from the Internet. Once a file is completely downloaded, it will be moved to a different temporary folder. If the application is disrupted while downloading, anything that is in the partial-download folder is considered incomplete and will be deleted. A file that is present in the completed folder is assumed to have all of its data and will not be downloaded again the next time the application starts. Once all files within a content set are downloaded, they are moved to the root of the application file system. The following function ensures that the necessary folders are present.

companion object {
    val TAG = "ContentUpdater"
    val STAGING_FOLDER = "staging"
    val COMPLETE_FOLDER = "completed"
}

fun ensureFoldersExists() {
    val applicationFilesFolder = context.filesDir.absoluteFile
    val stagingFolderPath = Paths.get(applicationFilesFolder.absolutePath, STAGING_FOLDER)
    val stagingFolder: File = stagingFolderPath.toFile()
    if (!stagingFolder.exists()) {
        stagingFolder.mkdir()
    }
    val downloadSetPath = Paths.get(applicationFilesFolder.absolutePath, COMPLETE_FOLDER)
    val completedFolder: File = downloadSetPath.toFile()
    if (!completedFolder.exists()) {
        completedFolder.mkdir()
    }
}


To package the assets, I added an “assets” folder to my Android project. By default, an Android Studio project does not have an assets folder. To add one within Android Studio, select File -> New -> Folder -> Assets Folder. Android Studio will place the assets folder in the right location. Place the files that you want to be able to update within this folder in your project. Most of the files that I placed in this folder are specific to the application that I was working on and can largely be viewed as arbitrary. The one file that absolutely must be present for this system to work is an additional file I made named updates.json. The file contains three vital categories of data.

  "version": 0,
  "updateURL": "https://j2i.net/apps/downloader/updates.json",
  "assets": [
    {
      "url": "",
      "name": "assetsManifest.json"
    },
    {
      "url": "",
      "name": "image0.png"
    },
    {
      "url": "",
      "name": "caption0.txt"
    }
  ]
}

The most important category of content is the names of the files that make up the content set. The code is going to use these names to know which assets to pull out of the application package. The other two important items are the asset version number and the update URL for grabbing updates. We will look at those items in a moment.

We want the code to check the file system to see if updates.json has already been extracted and written. If it is not present, then the code will copy it out of the package and place it in the file system. If it is already present, then it will not be overwritten. The file is never overwritten during this check because the file on the filesystem could be a more recent version than what was packaged with the application. After the application has ensured that this file is present, it reads through the properties for each asset. Each asset is composed of a url (indicating where the resource can be found) and a name (used for the file name when the file is extracted). In the above, all of the files have an empty string for the URL. If the URL is blank, then the file is assumed to be part of the application package. The routine for pulling an asset out and writing it to a file is fairly routine. It accepts the name of the file and a flag indicating whether the file should be overwritten if it is already present. You might recall seeing a form of this function in the previous entry that I made on this blog.

private fun assetFilePath(context: Context, assetName: String, overwrite:Boolean = false): String? {
    val file = File(context.filesDir, assetName)
    if (!overwrite && file.exists() && file.length() > 0) {
        return file.absolutePath
    }
    try {
        context.assets.open(assetName).use { inputStream ->
            FileOutputStream(file).use { os ->
                val buffer = ByteArray(4 * 1024)
                var read: Int
                while (inputStream.read(buffer).also { read = it } != -1) {
                    os.write(buffer, 0, read)
                }
                os.flush()
            }
            return file.absolutePath
        }
    } catch (e: IOException) {
        Log.e(TAG, "Error process asset $assetName to file path")
    }
    return null
}

To ensure that the assetFilePath function is called on each file that must be pulled from the application, I’ve written the function extractAssetsFromApplication. This function is generously commented. I’ll let the comments explain what the function does.



fun extractAssetsFromApplication(minVersion: Int, overwrite: Boolean = false) {
    //ensure that updates.json exists in the file system
    val updateFileName = "updates.json"
    val updatesFilePath = assetFilePath(this.context, updateFileName, overwrite) ?: return
    //Load the contents of updates.json
    val updateFile = File(updatesFilePath).inputStream()
    val contents = updateFile.bufferedReader().readText()
    //Use a JSONObject to parse out the file's data
    val updateObject = JSONObject(contents)
    //If the version in the file is below some minimum version, assume that it is
    //an old version left over from a previous version of the application and
    //restart the extraction process with the overwrite flag set.
    val assetVersion = updateObject.getInt("version")
    if (assetVersion < minVersion) {
        extractAssetsFromApplication(minVersion, true)
        return
    }
    //Start processing the individual asset items.
    val assetList = updateObject.get("assets") as JSONArray
    for (i in 0 until assetList.length()) {
        val currentObject = assetList.get(i) as JSONObject
        val currentFileName = currentObject.getString("name")
        val uri: String? = currentObject.getString("url")

        if (uri.isNullOrEmpty() || uri == "null") {
            //There is no URL associated with the file. It must be within
            //the application package. Copy it from the application package
            //and write it to the file system.
            assetFilePath(this.context, currentFileName, overwrite)
        } else {
            //If there is a URL associated with the asset, then add it to the
            //download queue. It will be downloaded later.
            val downloadRequest = ResourceDownloadRequest(currentFileName, URL(uri))
            downloadQueue.add(downloadRequest)
        }
    }
}
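
The downloadQueue that this function feeds holds ResourceDownloadRequest objects. That class doesn’t appear in the excerpts above; it is just a small value holder. Based on how it is consumed elsewhere in the code (a file name and a source URL), a minimal definition would look something like this.

import java.net.URL

//Minimal sketch of the request object used by the download queue.
//The field names match how the object is used in downloadFile(d.name, d.source).
data class ResourceDownloadRequest(val name: String, val source: URL)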

When the application first starts, we may need to address files that are lingering in the staging or completed folders. The completed folder contains files that have been successfully downloaded, but there may be other files in the set that have yet to be downloaded. If the file set is complete, there will be a file named “isComplete” in the folder. If that file is found, then the contents of the folder are copied to the root of the application’s files directory and deleted from the completed folder. Any files that are in the staging folder when the application starts are assumed to be incomplete; they are found and deleted.

fun applyCompleteDownloadSet() {
    val isCompleteFile = File(context.filesDir, COMPLETE_FOLDER + "/isComplete")
    if (!isCompleteFile.exists()) {
        return
    }
    val downloadFolder = File(context.filesDir, COMPLETE_FOLDER)
    //listFiles() returns null if the folder doesn't exist or can't be read
    val fileListToMove = downloadFolder.listFiles() ?: return
    for (f: File in fileListToMove) {
        val destination = File(context.filesDir, f.name)
        f.copyTo(destination, true)
        f.delete()
    }
}


fun clearPartialDownload() {
    val stagingFolder = File(context.filesDir, STAGING_FOLDER)
    //If we have a staging folder, we need to check its contents and delete them
    if (stagingFolder.exists()) {
        val fileList = stagingFolder.listFiles() ?: return
        for (f in fileList) {
            f.delete()
        }
    }
}

To check for updates online, the application loads updates.json and reads the version number and the updateURL. The file at the updateURL is another instance of updates.json, though if it is an update it will contain a different set of content. The version in the online copy of the file is compared to the version in the local copy. If the online version has a greater number, it is downloaded; otherwise, no further work is done. Any online version of updates.json must have the url properties populated for its assets; if a url is missing, the file is not valid. The download URLs and intended file names are collected (as the source URL might not contain the file name at all).

fun checkForUpdates() {
    //thread { } is kotlin.concurrent.thread; network I/O can't run on the main thread
    thread {
        val updateFile = File(context.filesDir, "updates.json")
        val sourceUpdateText = updateFile.bufferedReader().readText()
        val updateStructure = JSONObject(sourceUpdateText)
        val currentVersion = updateStructure.getInt("version")
        val updateURL = URL(updateStructure.getString("updateURL"))
        val newUpdateText =
            updateURL.openConnection().getInputStream().bufferedReader().readText()
        val newUpdateStructure = JSONObject(newUpdateText)
        val newVersion = newUpdateStructure.getInt("version")
        if (newVersion > currentVersion) {
            //The online content is newer. Queue every asset it lists for download.
            val assetsList = newUpdateStructure.getJSONArray("assets")
            for (i: Int in 0 until assetsList.length()) {
                val current = assetsList.get(i) as JSONObject
                val dlRequest = ResourceDownloadRequest(
                    current.getString("name"),
                    URL(current.getString("url"))
                )
                downloadQueue.add(dlRequest)
            }
            downloadFiles()
        }
    }
}

The downloadFiles function starts to get into the real work of what the component does. For any file, this function will make up to three attempts to download the file before giving up on it. The file contents are downloaded through the URL object, which provides an input stream to the resource the URL identifies. I’m arbitrarily downloading the file in 8 kilobyte chunks (8,192 bytes). As mentioned before, the chunks are written to a temporary folder. Once a file is complete, it gets moved.

@WorkerThread
fun downloadFiles() {
    val MAX_RETRY_COUNT = 3
    val failedQueue = LinkedList<ResourceDownloadRequest>()
    var retryCount = 0
    while (retryCount < MAX_RETRY_COUNT && downloadQueue.count() > 0) {
        while (downloadQueue.count() > 0) {
            val current = downloadQueue.pop()
            try {
                downloadFile(current)
            } catch (exc: IOException) {
                failedQueue.add(current)
            }
        }
        //Anything that failed goes back on the queue for the next retry pass.
        downloadQueue.addAll(failedQueue)
        failedQueue.clear()
        ++retryCount
    }
    if (downloadQueue.count() > 0) {
        //we've failed to download a complete set.
    } else {
        //A complete set was downloaded.
        //I'll mark a set as complete by creating a file. The presence of this file
        //marks a complete set. An absence would indicate a failure.
        val isCompleteFile = File(context.filesDir, COMPLETE_FOLDER + "/isComplete")
        isCompleteFile.createNewFile()
    }
}

fun downloadFile(d: ResourceDownloadRequest) {
    downloadFile(d.name, d.source)
}

fun downloadFile(name: String, source: URL) {
    val DOWNLOAD_BUFFER_SIZE = 8192
    val urlConnection: URLConnection = source.openConnection()
    urlConnection.connect()
    val length: Int = urlConnection.contentLength

    //Read from the connection we already opened rather than opening a second one.
    val inputStream: InputStream =
        BufferedInputStream(urlConnection.getInputStream(), DOWNLOAD_BUFFER_SIZE)
    //Files download into the staging folder first, then move to the completed folder.
    val targetFile = File(context.filesDir, STAGING_FOLDER + "/" + name)
    targetFile.createNewFile()
    val outputStream = targetFile.outputStream()
    val buffer = ByteArray(DOWNLOAD_BUFFER_SIZE)
    var bytesRead = 0
    var totalBytesRead = 0
    var percentageComplete = 0.0f
    do {
        bytesRead = inputStream.read(buffer, 0, DOWNLOAD_BUFFER_SIZE)
        if (bytesRead > -1) {
            totalBytesRead += bytesRead
            //Not used yet, but available if you want to surface progress to the UI.
            percentageComplete = 100F * totalBytesRead.toFloat() / length.toFloat()
            outputStream.write(buffer, 0, bytesRead)
        }
    } while (bytesRead > -1)
    outputStream.close()
    inputStream.close()
    val destinationFile = File(context.filesDir, COMPLETE_FOLDER + "/" + name)
    targetFile.copyTo(destinationFile, true, DEFAULT_BUFFER_SIZE)
    targetFile.delete()
}

That covers all of the more complex functionality in the code. How is it used? Usage starts with the constructor. When the ContentUpdater is instantiated, it will create the folders (if they do not already exist), extract the content from the application (if there is no content present), and clear the partial download folder. It does not automatically apply newly downloaded content to the application.

class ContentUpdater {

    companion object {
        val TAG = "ContentUpdater"
        val STAGING_FOLDER = "staging"
        val COMPLETE_FOLDER = "completed"
    }

    val context: Context
    val downloadQueue = LinkedList<ResourceDownloadRequest>()

    constructor(context: Context, minVersion: Int) {
        this.context = context

        ensureFoldersExists()
        extractAssetsFromApplication(minVersion)
        this.clearPartialDownload()
    }
}

In theory, I could have the routine do this as soon as a complete download set is present. But changing the content in the middle of a session within an application could cause problems. The application using the component can ask the component to apply downloaded content at any time by calling applyCompleteDownloadSet(). I have the application doing this in the onCreate event of the main activity. That way, the most recent content is applied before the rest of the application begins to initialize.
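
Putting that together, wiring the component into an activity only takes a few lines. This is a minimal sketch rather than the sample repository’s exact code; the minVersion of 1 and the trailing checkForUpdates() call are my own choices for illustration.

class MainActivity : AppCompatActivity() {

    lateinit var contentUpdater: ContentUpdater

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        //Apply any content set that finished downloading during a previous run
        //before the rest of the application initializes.
        contentUpdater = ContentUpdater(this, 1)
        contentUpdater.applyCompleteDownloadSet()
        setContentView(R.layout.activity_main)
        //Quietly check for newer content in the background for a later run.
        contentUpdater.checkForUpdates()
    }
}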

There are a lot of scenarios that I might consider if I ever use something like this in a production application. This includes possibly notifying the user of the progress of the download, giving the user the option to load the new content once it is complete, and some other scenarios around having multiple versions of the application in users’ hands at once. I would also move the download code to either a coroutine (instead of a thread) or possibly a service (for larger downloads), and consider limiting the downloads to WiFi. I wouldn’t suggest that the code I’ve presented here be copied directly into a production application, but it can be a good starting point if you are trying to figure out your own solution.
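
For what it’s worth, the thread-to-coroutine swap mentioned above is small. Here is a rough sketch, assuming the kotlinx-coroutines-android dependency is present and assuming checkForUpdates() has had its thread { } wrapper removed so that it runs synchronously; this is not how the sample repository is currently written.

import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.launch

//Hypothetical coroutine-based wrapper around the existing blocking logic.
fun ContentUpdater.checkForUpdatesAsync(scope: CoroutineScope) {
    scope.launch(Dispatchers.IO) {
        checkForUpdates()  //same version check and downloads, off the main thread
    }
}

In an activity, the lifecycleScope that ships with androidx.lifecycle would be a natural scope to pass in.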

Runtime Extraction of Android Assets

If you need to include additional content with your Android application that isn’t already supported by Android Studio’s native functionality, one solution is to place the content in the project’s assets folder. By default, a new project does not have an assets folder. You can easily add one through the menu sequence File -> New -> Folder -> Assets Folder. Assets that you add to this folder will be packaged with your app. They will also be compressed.

You have the option of not compressing the files. You may want to do this if the files are already in a compressed format and thus are not significantly reduced in size by additional compression. If you want a file type exempted from compression, you can direct the compiler not to compress it by making an addition to the build.gradle for the module. If I wanted txt files to be exempted from compression, I would make the following addition.

android {
    aaptOptions {
        noCompress 'txt'
    }
}

Uncompressed files are easy to read. If I placed a file named “readMe.txt” in my assets folder, I could get an InputStream for the file with the following line of code.

val myInputStream = context.assets.open("readMe.txt")
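
From there, reading the asset is ordinary stream handling. As a quick sketch (using the hypothetical readMe.txt from above), the whole file can be read into a string like this:

val readMeText = context.assets.open("readMe.txt")
    .bufferedReader()
    .use { it.readText() }  //use { } closes the stream when done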

You may want to write the files out to the file system for faster access. The following function, when given the name of an asset, will return the absolute path to the location of the file derived from that asset. It first checks to see if the asset has already been extracted to a file. If it has not, then it takes care of extracting it. Accessing the assets this way has an advantage: after the application has been deployed, it can check a web location at runtime for updated versions of the assets and write them to the file system. Without any further changes in logic, the application can just attempt to read the asset as normal and will receive the updated version.

fun assetFilePath(context: Context, assetName: String): String? {
    val file = File(context.filesDir, assetName)
    if (file.exists() && file.length() > 0) {
        //The asset has already been extracted; return the existing copy.
        return file.absolutePath
    }
    try {
        context.assets.open(assetName).use { inputStream ->
            FileOutputStream(file).use { os ->
                //Copy the asset out of the package in 4 KB chunks.
                val buffer = ByteArray(4 * 1024)
                var read: Int
                while (inputStream.read(buffer).also { read = it } != -1) {
                    os.write(buffer, 0, read)
                }
                os.flush()
            }
            return file.absolutePath
        }
    } catch (e: IOException) {
        Log.e(TAG, "Error processing asset $assetName to file path")
    }
    return null
}

In my next entry, I’ll be using this function to create an application that can also update its content from online content.

Samsung provides some Clarity in the Google Wearable Collaboration

At the last Google I/O conference, Google made a rather ambiguous announcement about their partnership with Samsung on watches. Samsung currently sells their Gear watches running an operating system that they made in collaboration with a few other companies. In the announcement, Google said that they were combining their Wear OS operating system with Samsung’s Tizen operating system. What exactly does this mean? No clarification was given during the conference. Looking at the conference sessions, there were two sessions on development for Google’s Android OS.

Generally speaking, one can’t just combine two operating systems. One could build a new operating system that supports applications from another OS, or take designs from the UI of one OS and apply them to another, but there isn’t anything meaningful in the phrase “combining operating systems.” Jumping over to the Samsung Developer forums, I found there were people with similar questions, all of which were met with the reply “We can’t give you more information at this time.”

Information was finally made available earlier this week. In summary, Samsung is going to adopt Wear OS (Android) for their watches. They said that they will support the existing Tizen-based watches for another three years. That announcement was surprisingly more direct than I’ve seen Samsung be with other products that they have sunset. What I’ve usually seen is that new versions of a product stop coming without any announcement being made (their Tizen-based Z phones, the Gear 360, and Gear VR headsets are all examples of products for which this happened).

If you would like to see the announcement yourself, you can view it in the YouTube video below. The part of interest begins at time marker 11:25 and continues to the announcement of three years of Tizen support at time marker 16:38. What exactly is meant by “support” could still use more clarification. I expect it to at least mean that developers will be able to submit and update applications for the next few years, but Samsung will be devoting significantly fewer resources to Tizen wearables.

This leaves Samsung’s TVs as their last category of hardware that uses the Tizen operating system.



Testing a Faraday Bag with AirTags

Among my many gadgets I have a Faraday bag. Faraday bags are essentially a flexible version of a Faraday cage. Such devices contain metallic material that blocks the passage of radio signals. You have probably seen various applications of this, such as wallets or envelopes designed to prevent an NFC credit card from being read, or the metallic grid in the door of a microwave oven that keeps the microwave radiation from getting out.

I won’t get into the physics of how these work, but it is worth noting that a Faraday cage may only work for a range of frequencies. A cage that prevents one device from getting a signal might not have the same effect on another device that uses a different frequency. While I’ve seen my Faraday bag successfully block WiFi and cellular signals from reaching my phone and tablet, I wanted to see if it would work with an AirTag. For those unfamiliar, the AirTag is Apple’s implementation of a Bluetooth tracking device. Another well-known Bluetooth tracker is from Tile. The fundamentals of how these devices work are essentially the same.

AirTags on top of Faraday Bag

The trackers are low-energy Bluetooth devices. If a tracker is near your phone, the phone detects the signal and the ID unique to the tracker. The phone takes note of where it was located when it loses the signal from the tracker and generally assumes that the tracker is in the last place it was when the signal was received. That isn’t always the case; the tracker may have been moved after the phone lost the signal (think of a device left in a taxi). The next method of locating that these devices use is that other people’s phones may see the tracker and relay its position. For the Tile devices, anyone else who has the Tile app on their phone effectively participates in relaying the position of Tiles that they encounter. For the AirTag, anyone with a fairly recent iPhone and firmware participates. My expectation is that the ubiquity of the iPhone will make it the location network with more coverage. As a test, I gave an AirTag to a willing participant and asked that they keep the device for a day. When I checked on the location of the device using the “Find My” app on the iPhone, I could see the person’s movements. On a commute to work, other iPhones that the person drove by on the Interstate reported the position. I could see the person’s location within a few minutes of them arriving at work.

There are some obvious privacy concerns with these devices, primarily from an unwilling party having an AirTag put in their belongings. Apple is working on solutions for some of the security concerns, though others remain. I thought about someone transporting a device with an AirTag who may not want their location tracked. One way to prevent this is to remove the battery. Another is to block the signal. Since I already have a Faraday bag, I decided to test out this second method.

I found that my Faraday bag successfully blocks the AirTag from being detected or from receiving a signal. You can see the test in the above video. This addresses one of the concerns with such trackers, though not all of them. This is great for an AirTag that one is knowingly transporting. For one that a person doesn’t realize is in their belongings, a method of detection is needed. For iPhone users, the iPhone is reported to alert a user if an AirTag that is not their own stays within their proximity. Results from others testing this have been a bit mixed. The AirTags are also reportedly going to play an alert sound if they are not within range of their owner for some random interval between 8 and 24 hours.

Presently, Android users would not get a warning, though Apple is said to be working on an Android application for detecting lingering AirTags. In the absence of such an application, I’ve tried using Bluetooth scanners on Android. The AirTag is successfully detected. The vendor (Apple) can be retrieved from the AirTag, but no other information is retrievable. I’ve got some ideas on how to specifically identify an AirTag within code for Android, but I need to do more testing to validate them. This is something that I plan to return to later on.
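
The scanning part of that experiment is straightforward, so here is a rough sketch of what it looks like. This only filters on Apple’s Bluetooth company identifier (0x004C), which matches far more than AirTags (any nearby Apple device that is advertising); telling AirTags apart from other Apple advertisements is the part that needs the further testing mentioned above. Permission handling (BLUETOOTH_SCAN / location) is omitted here.

import android.bluetooth.BluetoothManager
import android.bluetooth.le.ScanCallback
import android.bluetooth.le.ScanResult
import android.content.Context
import android.util.Log

const val APPLE_COMPANY_ID = 0x004C  //Bluetooth SIG company identifier assigned to Apple

fun startAppleDeviceScan(context: Context) {
    val manager = context.getSystemService(Context.BLUETOOTH_SERVICE) as BluetoothManager
    val scanner = manager.adapter.bluetoothLeScanner
    scanner.startScan(object : ScanCallback() {
        override fun onScanResult(callbackType: Int, result: ScanResult) {
            //BLE advertisements carry manufacturer-specific data keyed by company ID.
            val appleData = result.scanRecord?.getManufacturerSpecificData(APPLE_COMPANY_ID)
            if (appleData != null) {
                Log.d("AirTagScan", "Apple device at ${result.device.address}, " +
                        "${appleData.size} bytes of manufacturer data")
            }
        }
    })
}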

I purchased this Faraday Bag some time ago. The specific bag that I have is, from what I have found, no longer available. But other comparable bags are available on Amazon.



Faraday Bag for Phones

Faraday Bag for Tablets and Phones

Silicon AirTag Case
