Updating Android Content without Redeploying

On short notice I received an assignment to put together a quick, functional prototype for an application. The prototype only needed to demonstrate that some bit of functionality was possible. I wanted to be able to update some of the assets used by the application without doing a redeploy. Part of the reason for this is that the application was going to be demonstrated by someone in another city, and I wouldn't be able to make any last-minute updates myself through a redeploy. I managed to put together a system that allowed me to make content updates on a website that the demonstration device could download when the application was run. I'm sharing that solution here.

A few things to keep in mind, though: since this was a prototype that had to be put together rapidly, there are some implementation details that I would probably not use in a real application, such as performing the downloads on a thread instead of in a coroutine.

To make this work, the application by design loads assets from the file system. The assets that it uses are packaged with the application. On first run, the app will pull those assets from its package and write them to the file system. The application that I am demonstrating here loads a list of images and captions for those images and displays them on the screen. For the asset collection that is baked into the application, I only have one image and one caption.

To demonstrate, I’ve created a new sample application (as I can’t share the prototype that I made) that lists the images that it has on the screen. Initially, this is a list with a single image. If you would like to see the complete code, you can clone it from https://github.com/j2inet/AndroidContentDownloadSample.git. When the application is run, it downloads an alternative content set. The images I used were taken in the High Museum of Art in Atlanta.

The application at first run (packaged content only) and after the content download has completed (web content)
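Before getting into the updater itself, here is a rough sketch of how content like this can be read back off of the file system and paired up for display. The manifest layout and the loadContent name here are illustrative assumptions, not the actual code from the sample project.

import android.content.Context
import android.graphics.Bitmap
import android.graphics.BitmapFactory
import org.json.JSONObject
import java.io.File

// Hypothetical sketch: reads a manifest from the app's files directory and
// pairs each image with its caption. The real sample's manifest may differ.
fun loadContent(context: Context): List<Pair<Bitmap, String>> {
    val manifest = JSONObject(File(context.filesDir, "assetsManifest.json").readText())
    val items = manifest.getJSONArray("items")
    val content = mutableListOf<Pair<Bitmap, String>>()
    for (i in 0 until items.length()) {
        val item = items.getJSONObject(i)
        // Each entry is assumed to name an image file and a caption file in filesDir.
        val imageFile = File(context.filesDir, item.getString("image"))
        val captionFile = File(context.filesDir, item.getString("caption"))
        content.add(Pair(BitmapFactory.decodeFile(imageFile.absolutePath), captionFile.readText()))
    }
    return content
}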

There are a few folder locations that I’ll use for managing files. A complete content set will be present at the root of the application’s files directory. There will be a subfolder that holds partially downloaded files while they are being sourced from the internet. Once a file is completely downloaded, it is moved to a different temporary folder. If the application is disrupted while downloading, anything that is in the partial download folder is considered incomplete and will be deleted. A file that is present in the completed folder is assumed to have all of its data and will not be downloaded again the next time the application starts. Once all files within a content set are downloaded, they are moved to the root of the application’s file system. This is the function that is used to ensure that the necessary folders are present.

companion object {
    public val TAG = "ContentUpdater"
    val STAGING_FOLDER = "staging"
    val COMPLETE_FOLDER = "completed"
}

fun ensureFoldersExists() {
    val applicationFilesFolder = context.filesDir.absoluteFile
    val stagingFolderPath = Paths.get(applicationFilesFolder.absolutePath, STAGING_FOLDER)
    val stagingFolder:File =  stagingFolderPath.toFile()
    if(!stagingFolder.exists()) {
        stagingFolder.mkdir()
    }
    val downloadSetPath = Paths.get(applicationFilesFolder.absolutePath, COMPLETE_FOLDER)
    val completedFolder:File = downloadSetPath.toFile()
    if(!completedFolder.exists()) {
        completedFolder.mkdir()
    }
}


To package the assets, I added an “assets” folder to my Android project. By default, an Android Studio project does not have an assets folder. To add one, within Android Studio, select File -> New -> Folder -> Assets Folder. Android Studio will place the assets folder in the right place. Place the files that you want to be able to update within this folder in your project. Most of the files that I placed in this folder are specific to the application that I was working on and can largely be viewed as arbitrary. The one file that absolutely must be present for this system to work is an additional file I made named updates.json. The file contains three vital categories of data.

  "version": 0,
  "updateURL": "https://j2i.net/apps/downloader/updates.json",
  "assets": [
    {
      "url": "",
      "name": "assetsManifest.json"
    },
    {
      "url": "",
      "name": "image0.png"
    },
    {
      "url": "",
      "name": "caption0.txt"
    }
  ]
}

The most important category of content is the list of names of the files that make up the content set. The code is going to use these names to know which assets to pull out of the application package. The other two important items are the asset version number and the update URL for grabbing updates. We will look at those items in a moment.

We want the code to check the file system to see if updates.json has already been extracted and written. If it is not present, then the code will copy it out of the package and place it in the file system. If it is already present, then it will not be overwritten. The file is never overwritten during this check because the file that is on the file system could be a more recent version than what was packaged with the application. After the application has ensured that this file is present, it reads through the properties for each asset. Each asset is composed of a url (indicating where the resource can be found) and a name (which will be used for the file name when the file is extracted). In the above, all of the files have an empty string for the URL. If the URL is blank, the file is assumed to be part of the application package. The routine for pulling an asset out of the package and writing it to the file system is fairly conventional. It accepts the name of the file and a flag indicating whether it should be overwritten if the file is already present. You might recall seeing a form of this function in the previous entry that I made on this blog.

private fun assetFilePath(context: Context, assetName: String, overwrite:Boolean = false): String? {
    val file = File(context.filesDir, assetName)
    if (!overwrite && file.exists() && file.length() > 0) {
        return file.absolutePath
    }
    try {
        context.assets.open(assetName).use { inputStream ->
            FileOutputStream(file).use { os ->
                val buffer = ByteArray(4 * 1024)
                var read: Int
                while (inputStream.read(buffer).also { read = it } != -1) {
                    os.write(buffer, 0, read)
                }
                os.flush()
            }
            return file.absolutePath
        }
    } catch (e: IOException) {
        Log.e(TAG, "Error process asset $assetName to file path")
    }
    return null
}

To ensure that the assetFilePath function is called on each file that must be pulled from the application, I’ve written the function extractAssetsFromApplication. This function is generously commented. I’ll let the comments explain what the function does.



fun extractAssetsFromApplication(minVersion:Int, overwrite:Boolean = false) {
    //Ensure that updates.json exists in the file system
    val updateFileName = "updates.json"
    val updatesFilePath = assetFilePath(this.context, updateFileName, overwrite)
    //Load the contents of updates.json. assetFilePath only returns null on an
    //I/O error, so the path is asserted non-null here.
    val updateFile = File(updatesFilePath!!).inputStream()
    val contents = updateFile.bufferedReader().readText()
    //Use a JSONObject to parse out the file's data
    val updateObject = JSONObject(contents)
    //If the version in the file is below the minimum version, assume that it is
    //an old version left over from a previous version of the application and
    //restart the extraction process with the overwrite flag set.
    val assetVersion = updateObject.getInt("version")
    if(assetVersion < minVersion) {
        extractAssetsFromApplication(minVersion,true)
        return
    }
    //Let's start processing the individual asset items.
    val assetList = updateObject.get("assets") as JSONArray
    for(i in 0 until assetList.length()) {
        val currentObject = assetList.get(i) as JSONObject
        val currentFileName = currentObject.getString("name")
        val uri:String? =  currentObject.getString("url")

        if(uri.isNullOrEmpty() || uri == "null") {
            //There is no URL associated with the file. It must be within
            // the application package. Copy it from the application package
            //and write it to the file system
            assetFilePath(this.context, currentFileName, overwrite)
        } else {
            //If there is a URL associated with the asset, then add it to the download
            //queue. It will be downloaded later.
            val downloadRequest = ResourceDownloadRequest(currentFileName, URL(uri))
            downloadQueue.add(downloadRequest)
        }
    }
}

When the application first starts, we may need to address files that are lingering in the staging or completed folder. The completed folder contains files that have successfully been downloaded. But there may be other files for the file set that have yet to be downloaded. If the file set is complete there will be a file named “isComplete” in the folder. If that file is found, then the contents of the folder are copied to the root of the application’s file system and are deleted from the completed folder. Any files that are in the staging folder when the application starts are assumed to be incomplete. They are found and deleted.

fun applyCompleteDownloadSet() {
    val isCompleteFile = File(context.filesDir, COMPLETE_FOLDER + "/isComplete")
    if(!isCompleteFile.exists()) {
        return;
    }
    val downloadFolder = File(context.filesDir, COMPLETE_FOLDER)
    val fileListToMove = downloadFolder.listFiles()
    for(f:File in fileListToMove) {
        val destination = File(context.filesDir, f.name)
        f.copyTo(destination, true)
        f.delete()
    }
}


fun clearPartialDownload() {
    val stagingFolder = File(context.filesDir, STAGING_FOLDER)
    //If we have a staging folder, we need to check its contents and delete them
    if(stagingFolder.exists())
    {
        val fileList = stagingFolder.listFiles()
        for(f in fileList) {
            f.delete()
        }
    }
}

To check for updates online, the application loads updates.json and reads the version number and the updateURL. The file at the updateURL is another instance of updates.json, though if it is an update it will describe a different set of content. The version in the online copy of this file is compared to the local version. If the online copy has a greater version number, then it is downloaded; otherwise no further work is done. An updates.json hosted online must have the url property populated for each asset; if this value is missing, the entry is not valid. The download URLs and intended file names are collected (as the source URL might not contain the file name at all).

fun checkForUpdates() {
    thread {
        val updateFile = File(context.filesDir, "updates.json")
        val sourceUpdateText = updateFile.bufferedReader().readText()
        val updateStructure = JSONObject(sourceUpdateText)
        val currentVersion = updateStructure.getInt("version")
        val updateURL = URL(updateStructure.getString("updateURL"))
        val newUpdateText =
            updateURL.openConnection().getInputStream().bufferedReader().readText()
        val newUpdateStructure = JSONObject(newUpdateText)
        val newVersion = newUpdateStructure.getInt("version")
        if (newVersion > currentVersion) {
            val assetsList = newUpdateStructure.getJSONArray("assets")
            for (i: Int in 0 until assetsList.length()) {
                val current = assetsList.get(i) as JSONObject
                val dlRequest = ResourceDownloadRequest(
                    current.getString("name"),
                    URL(current.getString("url"))
                )
                downloadQueue.add(dlRequest)
            }
            downloadFiles();
        }
    }
}

The downloadFiles function starts to get into the real work of what the component does. For any file, this function will make up to three attempts to download the file before giving up on it. The file contents are downloaded through the URL object, which provides an InputStream to the resource that the URL identifies. I’m arbitrarily downloading the file in 8 kilobyte chunks (8,192 bytes). As mentioned before, the chunks are written to the staging folder. Once a file is completely downloaded, it gets moved to the completed folder.

    @WorkerThread
    fun downloadFiles() {
        val MAX_RETRY_COUNT = 3
        val failedQueue = LinkedList<ResourceDownloadRequest>()
        var retryCount = 0;
        while(retryCount<MAX_RETRY_COUNT && downloadQueue.count()>0) {

            while (downloadQueue.count()>0) {
                val current = downloadQueue.pop()
                try {
                    downloadFile(current)
                } catch (exc: IOException) {
                    failedQueue.add(current)
                }
            }
            downloadQueue.clear()
            downloadQueue.addAll(failedQueue)
            //Clear the failed queue so that failures do not accumulate as
            //duplicates across retry passes.
            failedQueue.clear()
            ++retryCount
        }
        if(downloadQueue.count()>0) {
            //we've failed to download a complete set.
        } else {
            //A complete set was downloaded
            //I'll mark a set as complete by creating a file. The presence of this file
            //marks a complete set. An absence would indicate a failure.
            val isCompleteFile = File(context.filesDir, COMPLETE_FOLDER + "/isComplete")
            isCompleteFile.createNewFile()
        }
    }

    fun downloadFile(d:ResourceDownloadRequest) {
        downloadFile(d.name, d.source)
    }

    fun downloadFile(name:String, source: URL) {
        val DOWNLOAD_BUFFER_SIZE = 8192
        val urlConnection:URLConnection = source.openConnection()
        urlConnection.connect();
        val length:Int = urlConnection.contentLength

        val inputStream:InputStream = BufferedInputStream(source.openStream(), DOWNLOAD_BUFFER_SIZE)
        val targetFile = File(context.filesDir, STAGING_FOLDER + "/"+ name)
        targetFile.createNewFile();
        val outputStream = targetFile.outputStream()
        val buffer = ByteArray(DOWNLOAD_BUFFER_SIZE)
        var bytesRead = 0
        var totalBytesRead = 0;
        var percentageComplete = 0.0f
        do {
            bytesRead = inputStream.read(buffer,0,DOWNLOAD_BUFFER_SIZE)
            if(bytesRead>-1) {
                totalBytesRead += bytesRead
                percentageComplete = 100F * totalBytesRead.toFloat() / length.toFloat()
                outputStream.write(buffer, 0, bytesRead)
            }
        } while(bytesRead > -1)
        outputStream.close()
        inputStream.close()
        val destinationFile = File(context.filesDir, COMPLETE_FOLDER + "/"+ name)
        targetFile.copyTo(destinationFile, true, DEFAULT_BUFFER_SIZE)
        targetFile.delete()
    }

That covers all of the more complex functionality in the code. How is it used? Usage starts with the constructor. When the ContentUpdater is instantiated, it will create the folders (if they do not already exist), extract the content from the application (if no content is present), and clear the partial download folder. It does not automatically apply newly downloaded content to the application.

class ContentUpdater {

    companion object {
        public val TAG = "ContentUpdater"
        val STAGING_FOLDER = "staging"
        val COMPLETE_FOLDER = "completed"
    }
    val context:Context
    val downloadQueue = LinkedList<ResourceDownloadRequest>()

    constructor(context: Context, minVersion:Int) {
        this.context = context

        ensureFoldersExists()
        extractAssetsFromApplication(minVersion);
        this.clearPartialDownload()
    }
}

In theory, I could have the routine do this as soon as a complete download set is present. But changing the content in the middle of a session within an application could cause problems. The application using the component can ask the component to apply downloaded content at any time by calling applyCompleteDownloadSet(). I have the application doing this in the onCreate event of the main activity. That way the most recent content is applied before the rest of the application begins to get initialized.
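As a rough sketch of that usage (the layout name and the minimum version value here are placeholders, not the exact code from the sample project), the main activity might look something like this.

import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity

class MainActivity : AppCompatActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)

        //Constructing the updater ensures the folders exist, extracts the packaged
        //assets if needed, and clears any partially downloaded files.
        val contentUpdater = ContentUpdater(this, minVersion = 0)
        //Apply any content set that finished downloading on a previous run before
        //the rest of the application starts reading the files.
        contentUpdater.applyCompleteDownloadSet()
        //Check for newer content in the background for use on a later run.
        contentUpdater.checkForUpdates()
    }
}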

There are a lot of scenarios that I might consider if I ever use something like this in a production application. This includes possibly notifying the user of the progress of the download, giving the user the option to load the new content once it is complete, and handling having multiple versions of the application in users’ hands at once. I would also move the download code to either a coroutine (instead of a thread) or possibly a service (for larger downloads) and consider limiting the downloads to WiFi. I wouldn’t suggest copying the code that I’ve presented here directly into a production application, but it can be a good starting point if you are trying to figure out your own solution.
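For example, the blocking update check could be launched from a coroutine on the IO dispatcher instead of being wrapped in thread { }. This is only a sketch; it assumes the kotlinx-coroutines and lifecycle-runtime-ktx dependencies are present and that checkForUpdates no longer spawns its own thread internally.

import androidx.appcompat.app.AppCompatActivity
import androidx.lifecycle.lifecycleScope
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.launch

//Sketch: run the update check and downloads off the main thread using a coroutine
//tied to the activity's lifecycle rather than an unmanaged thread.
fun AppCompatActivity.checkForUpdatesAsync(updater: ContentUpdater) {
    lifecycleScope.launch(Dispatchers.IO) {
        updater.checkForUpdates()
    }
}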

Runtime Extraction of Android Assets

If you need to include additional content with your Android application that isn’t already handled natively by Android Studio, one solution is to place the content in the project’s Assets folder. By default, a new project does not have an Assets folder. You can easily add one through the menu sequence File -> New -> Folder -> Assets Folder. Assets that you add to this folder will be packaged with your app. They will also be compressed.

You have the option of not compressing the files. You may want to do this if the files are already in a compressed format and thus are not significantly reduced in size by additional compression. If you want a file type exempted from compression, you can direct the build to skip compressing it by making an addition to the module’s build.gradle. If I wanted txt files exempted from compression, I would make the following addition.

android {
    aaptOptions {
        noCompress 'txt'
    }
}

Uncompressed files are easy to read. If I placed a file named “readMe.txt” in my assets folder, I can get an InputStream for the file with the following line of code.

val myInputStream = context.assets.open("readMe.txt")

You may want to write the files out to the file system for faster access. The following function, when given the name of an asset, will return the absolute path to the location of the file derived from the asset. It first checks to see if the asset has already been extracted to a file. If it has not, then it will take care of extracting it. Accessing the assets this way has an advantage: after an application has been deployed, it could check a web location at runtime for updated versions of the assets and write them to the file system. Without any further changes in logic, the application could just attempt to read the asset as normal and it would receive the updated version.

    fun assetFilePath(context: Context, assetName: String): String? {
        val file = File(context.filesDir, assetName)
        if (file.exists() && file.length() > 0) {
            return file.absolutePath
        }
        try {
            context.assets.open(assetName).use { inputStream ->
                FileOutputStream(file).use { os ->
                    val buffer = ByteArray(4 * 1024)
                    var read: Int
                    while (inputStream.read(buffer).also { read = it } != -1) {
                        os.write(buffer, 0, read)
                    }
                    os.flush()
                }
                return file.absolutePath
            }
        } catch (e: IOException) {
            Log.e(TAG, "Error process asset $assetName to file path")
        }
        return null
    }

In my next entry, I’ll be using this function to create an application that can also update its content from online content.

Samsung provides some Clarity in the Google Wearable Collaboration

At the last Google I/O conference, Google made a rather ambiguous announcement about their partnership with Samsung on watches. Samsung currently sells their Gear watches running an operating system that they made in collaboration with a few other companies. In the announcement, Google said that they were combining their Wear OS operating system with Samsung’s Tizen operating system. What exactly does this mean? No clarification was given during the conference. Looking at the conference sessions, there were two sessions on development for Google’s Android OS.

Generally speaking, one can’t just combine two operating systems. They could build a different operating system that has support for the applications from another OS, or take designs from the UI of one OS and apply them to another. But there isn’t anything inherently meaningful in the phrase “combining operating systems.” Jumping over to the Samsung Developer forums, I found there were people with similar questions, all of which were met with the reply “We can’t give you more information at this time.”

Information was finally made available earlier this week. In summary, Samsung is going to adopt Wear OS (Android) for their watches. They said that they will support the existing Tizen based watches for another three years. That announcement was surprisingly more direct than I’ve seen Samsung be with other products that they sunset. What I’ve usually seen is that new versions of a product stop coming without any announcement being made (Their Tizen based Z phones, the Gear 360, and Gear VR headsets are all examples of products for which this happened).

If you would like to see the announcement yourself, you can view it in the YouTube video below. The part of interest can be found at time marker 11:25 and continues to the announcement of three years of Tizen support at time marker 16:38. What exactly is meant by “support” could still use more clarification. I expect it to at least mean that developers will be able to submit and update applications for the next few years, but Samsung will be giving significantly fewer resources to Tizen wearables.

This leaves Samsung’s TVs as their last category of hardware that uses the Tizen operating system.


Twitter: @j2inet
Instagram: @j2inet
Facebook: j2inet
YouTube: j2inet

Testing a Faraday Bag with AirTags

Among my many gadgets I have a Faraday bag. Faraday bags are essentially a flexible version of a Faraday cage. Such devices contain metallic material and prevent the passage of radio signals. You have probably seen various applications of this, such as wallets or envelopes designed to prevent an NFC credit card from being read, or the metallic grid in the door of a microwave oven that prevents the microwave radiation from getting out.

I won’t get into the physics of how these work. But it is worth noting that a Faraday cage may only work for a range of frequencies. A cage that prevents one device from getting a signal might not have the same effect on another device that uses a different frequency. While I’ve seen that my Faraday bag successfully blocks WiFi and cellular signals from reaching my phone and tablet, I wanted to see if it would work with an AirTag. For those unfamiliar, the AirTag is Apple’s implementation of a Bluetooth tracking device. Another well-known Bluetooth tracker is from Tile. The fundamentals of how these devices work are essentially the same.

AirTags on top of Faraday Bag

The trackers are low-energy Bluetooth devices. If the tracker is near your phone, the phone detects the signal and the ID unique to the tracker. The phone takes note of where it was located when it loses the signal from the tracker and generally assumes that the tracker is in the last place it was when a signal was received. That isn’t always the case; the tracker may have been moved after the phone lost the signal (think of a device left in a taxi). The next method of locating that these devices use is that other people’s phones may see the tracker and relay its position. For the Tile devices, anyone else that has the Tile app on their phone effectively participates in relaying the position of Tiles that they encounter. For the AirTag, anyone with a fairly recent iPhone and firmware participates. My expectation is that the ubiquity of the iPhone will make it the location network with more coverage. As a test, I gave an AirTag to a willing participant and asked that they keep the device for a day. When I checked on the location of the device using the “Find My” app on the iPhone, I could see the person’s movements. On a commute to work, other iPhones that the person drove by on the Interstate reported the position. I could see the person’s location within a few minutes of them arriving at work.

There are some obvious privacy concerns with these devices, primarily from an unwilling party having an AirTag placed in their belongings. Apple is working on solutions for some of the security concerns, though others remain. I thought about someone transporting a device with an AirTag who may not want their location known. One way to prevent tracking is to remove the battery. Another is to block the signal. Since I already have a Faraday bag, I decided to test this second method.

I found that my Faraday bag successfully blocks the AirTag from being detected or from receiving a signal. You can see the test in the above video. This addresses one of the concerns for such trackers, though not all of them. This is great for an AirTag that one is knowingly transporting. For one that a person doesn’t realize is in their belongings, a method of detection is needed. For iPhone users, the iPhone is reported to alert a user if there is an AirTag that stays within their proximity that is not their own. Results from others testing this have been a bit mixed. The AirTags are also reportedly going to play an alert sound if they are not within range of their owner for some random interval between 8 and 24 hours.

Presently, Android users would not get a warning, though Apple is said to be working on an Android application for detecting lingering AirTags. In the absence of such an application, I’ve tried using Bluetooth scanners on Android. The AirTag is successfully detected. The vendor (Apple) can be retrieved from the AirTag, but no other information is retrievable. I’ve got some ideas on how to specifically identify an AirTag within code for Android, but need to do more testing to validate this. This is something that I plan to return to later on.
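As a rough illustration of that idea, a standard Android BLE scan can at least flag advertisements that carry Apple’s Bluetooth company identifier (0x004C). Distinguishing an AirTag from any other Apple device would require inspecting the manufacturer data further, which is the part I still need to validate, so treat this as a sketch rather than a detector.

import android.bluetooth.BluetoothAdapter
import android.bluetooth.le.ScanCallback
import android.bluetooth.le.ScanResult
import android.util.Log

//Sketch only: requires the Bluetooth (and, on most versions, location) permissions.
//0x004C is Apple's Bluetooth SIG company identifier.
private const val APPLE_COMPANY_ID = 0x004C

val appleScanCallback = object : ScanCallback() {
    override fun onScanResult(callbackType: Int, result: ScanResult) {
        val appleData = result.scanRecord?.getManufacturerSpecificData(APPLE_COMPANY_ID)
        if (appleData != null) {
            //This only tells us the advertiser is an Apple device; identifying it
            //as an AirTag would require more analysis of the payload.
            Log.i("AirTagScan", "Apple BLE device seen: ${result.device.address}")
        }
    }
}

fun startAppleScan() {
    BluetoothAdapter.getDefaultAdapter()?.bluetoothLeScanner?.startScan(appleScanCallback)
}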

I purchased this Faraday Bag some time ago. The specific bag that I have is, from what I have found, no longer available. But other comparable bags are available on Amazon.

Posts may contain products with affiliate links. When you make purchases using these links, we receive a small commission at no extra cost to you. Thank you for your support.


Faraday Bag for Phones

Faraday Bag for Tablets and Phones

Silicone AirTag Case


Controlling WiFi State in an Android App

Many apps require a network connection. Provided the connection meets bandwidth requirements, the apps don’t typically care how that connection is established. But an application that controls home devices may specifically want to communicate over WiFi. In Android, there are two ways that this has been handled: some applications will turn on WiFi if it is turned off, while others prompt the user to turn on WiFi.

Of the two options, having the application turn on WiFi itself is an approach that is not supported on Android Q and later. This is part of the privacy changes in Android Q. For older versions of Android, controlling the WiFi could be performed through the WIFI_SERVICE.

val wifi = getSystemService(WIFI_SERVICE) as WifiManager
if(!wifi.isWifiEnabled)
    wifi.isWifiEnabled = true;

If you try this now, Android Studio will give a deprecation warning. To have the user change the WiFi state the application can open the WiFi control interface for the user. Rather than dump the user into the WiFi control interface, it is generally better to prompt them first. The WiFi control UI can be opened with an intent for ACTION_WIRELESS_SETTINGS.

startActivity(Intent(Settings.ACTION_WIRELESS_SETTINGS));

An easy way to prompt the user is to use AlertDialog.Builder. A complete solution looks like the following.

    fun checkWifiState() {
        val wifi = getSystemService(WIFI_SERVICE) as WifiManager
        if(!wifi.isWifiEnabled) {

            val dialogClickListener =
                DialogInterface.OnClickListener { dialog, which ->
                    when (which) {
                        DialogInterface.BUTTON_POSITIVE -> {
                            startActivity(Intent(Settings.ACTION_WIRELESS_SETTINGS));
                        }
                        DialogInterface.BUTTON_NEGATIVE -> {
                        }
                    }
                }

            val builder: AlertDialog.Builder = AlertDialog.Builder(this)
            builder.setMessage("WiFi must be enabled. Do you want to open the WiFi controls?")
                .setPositiveButton("Yes", dialogClickListener)
                .setNegativeButton("No", dialogClickListener)
                .show()
        }
    }

Waveshare 7600X Jetson Schematic Review

I’ve been using the Waveshare 7600 4G for the Jetson Nano (hereafter referred to simply as the 7600). It gives the Jetson a connection to 4G mobile networks, GPS, and the ability to make and receive phone calls and SMS. (If you are looking for information on getting a WiFi connection, see this post.) The Waveshare 7600 4G is a board built around the SIMCOM SIM7600X, an integrated circuit that provides the communications functionality of a mobile phone. Waveshare makes variations of this device for both the Raspberry Pi and the Jetson Nano. While there is a great deal of overlap in the usage of the two devices, there are differences that are not immediately apparent from the top-level documentation. That said, reading the documentation for one gives insight into using the other. The pin assignments differ, but most of the usage and functionality are the same.

When I initially looked at the Waveshare 7600 4G for the Jetson, there were elements of usage that didn’t make sense to me without consulting documentation from the chip maker and the schematic provided by Waveshare.

You don’t need to be familiar with the electrical schematic for the Waveshare board to use it. But if you would like more insight as to what the board is doing, continue reading.

SIMCOM 7600 Pinout

A quick glance of the pinout for the SIMCOM 7600 integrated circuit gives hints at a number of interfaces the chip supports. There are pins for power control, USB data transfer, interfacing to an SD card, GNSS, Flight mode, I2C, a UART interface, functionality concerning batteries, and analog to digital conversion. While the SIMCOM 7600 supports all of these interfaces, the Waveshare 7600 board built around this supports a subset of these interfaces; some of the lines on the SIMCOM 7600 are not connected to an external interface.

The schematic that Waveshare provides is available on their Wiki at this URL. In the center of the schematic is the SIM7600.

SIM7600CX

Looking at this part of the diagram alone, you might notice that the pins labeled SCL and SDA are both tied to the positive voltage source instead of to other circuitry. These labels are associated with an I2C interface. Since the Waveshare’s board does not bridge the I2C pins between the Jetson Nano and the SIMCOM 760X, there is no I2C that you can use. Let’s take a look at how Waveshare’s device is connected to the Jetson Nano’s 40 pin header.

Waveshare Jetson Interface

Here you can see that the Waveshare board connects to some voltage and ground lines on the 40-pin interface. The only pins related to functionality used are pins 8 and 10 on the Jetson, which connect the Jetson’s UART to the SIMCOM 7600’s UART, and pins 31 and 33 on the Jetson, which connect to pins labeled D6 and D13 on the Waveshare board. Let’s trace these lines.

The two UART lines don’t connect directly to the SIMCOM 7600X. Instead they pass through a pair of switches and an integrated circuit identified as TXB0108EPWR. According to a datasheet (PDF), this circuit is a level shifter.

This circuit isn’t providing any additional signals. It allows the SIMCOM 7600, which uses 1.8 volt signaling, to communicate with the Jetson Nano over 3.3 volt signaling. The two switches allow the SIMCOM 7600’s UART and the Jetson’s UART to be disconnected from each other. It is also possible to connect directly to the 3.3 volt side of this circuit through pins exposed on the board.

Waveshare UART switch

There’s a schematic for the USB port. There’s nothing significant in the USB connection that you need to know. When I first got my hands on this board I was wondering why there were interfaces for both USB and a UART. Why are there two connections? This was answered by reading the SIMCOM 7600 documentation. The 7600 can accept AT commands over either interface and perform data transfers over either interface. The USB port is set to suspend if it is not used within some time period. It becomes active again during certain wake events.

In the Waveshare Wiki, developers are instructed to set a pin of the Jetson Nano to high and then low to ensure that the Waveshare device is turned on. The following is a script that Waveshare provides for doing this.

echo 200 > /sys/class/gpio/export 
echo out > /sys/class/gpio200/direction 
echo 1 > /sys/class/gpio200/value 
echo 0 > /sys/class/gpio200/value 

There is not an explanation given as to what this is doing. GPIO200 is connected to pin 31 on the Jetson’s 40 pin header. Pin 31 leads to a pin labeled D6 on the board. What does pin D6 do? On the schematic we find that D6 terminates on a jumper.

D6 on Schematic

Layouts on an electrical schematic don’t necessarily map spatially. Looking at the physical connector in question, we can see more of what it does.

Closeup of D6

There is a jumper that can either bridge D6 to PWR, or bridge the 5V supply to PWR. In the above, since PWR and 5V are connected, the PWR line will always be high, and pin 31 on the Jetson Nano is free for other purposes. If D6 and PWR are bridged instead, then the Jetson controls the power state of the board. The SIMCOM 7600 could go to a lower power mode for a number of reasons, including receiving a command instructing it to go to a lower power state. Pulsing this line will wake the SIMCOM 7600 up. If we look a little deeper at the schematic, we find PWR does not go directly to the SIMCOM 7600. It passes through another circuit.

Circuit for PWR signal

The end of the circuit labeled POWERKEY is connected to the SIMCOM 7600’s line of the same label. According to the SIMCOM 7600 documentation, pulling this line to ground will power the unit on (the POWERKEY line is connected to positive 1.8V internally within the SIMCOM 7600). The end result of this circuit could almost be viewed as an inverter: a high voltage from D6 or PWR results in the POWERKEY pin connecting to ground (a low signal) and powering on. Sending a low signal to PWR causes POWERKEY to be driven high by its internal resistor, which powers the SIMCOM 7600X off.

There is a circuit labeled “Flight Mode” that does something similar. The Flight Mode circuit bridges Pin 33 from the Jetson Nano to a pin on the SIMCOM 7600X labeled “Flight Mode.” As you may have inferred from using a phone, activating Flight Mode disables the radios within the SIMCOM 7600X.

Flight Mode circuit

There are a few other components on the schematic that don’t involve any signaling with the Jetson Nano. Of course, there is also the circuit that connects to the SIM card. Waveshare has also supplied a circuit for interfacing with earphones and a microphone. To use this feature you only need to connect a headset to the jack on the board.

There are three more interfaces on the schematic for the antennas. Two of these are self-explanatory: the GNSS connector is for a GPS antenna, and the Main connector is for a cellular antenna. There is a third antenna connector that is labeled AUX on the Waveshare board and DIV Ant on the schematic. While not strictly necessary, connecting an antenna to this connector can enhance the 4G performance of the SIM7600.

That covers all of the circuits that connect to the Waveshare 7600X 4G.

See other posts on the NVIDIA Jetson.



Shaders on Chrome and Multi-GPU Systems

I have been working with graphics shaders in Chrome. There is a lot that you can do with shaders alone. If you are running a recent version of Chrome, have a decent GPU in your computer, and want to see what can be done with them, take a look at https://shadertoy.com for samples of the kind of real-time graphics that are possible. I won’t talk about how shaders work here, though. I want to talk about a performance problem I encountered and how I got around it.

I usually use a 27-inch iMac running Windows for day-to-day work. The computer has a GPU that was made for mobile computers. Having been manufactured back in 2014, as you might guess, it is a pretty weak GPU. To address some of the shader performance problems that I encountered, I tried using an eGPU (external GPU). But I did not see the performance gains that I had expected. Shader performance was even worse when I ran shaders in Chrome on the GTX 1080 that was in the external GPU.

What was going on? I decided to look at the Chromium source code to get an idea for this. Chrom(e|ium) uses a library called Angle for its low level graphics calls. Angle abstracts away the underlying graphics API so that someone can use the same source base for more than one type of device. On Windows machines the low level graphics APIs are generically referred to as DirectX. DirectX is a family of APIs with Direct3D being the set of DirectX APIs focused on 3D graphics. Angle supports Direct3D versions 9 and 11. Chrome uses Direct3D 9 though.

Looking in the Angle source code, it did not take long to find the source of my problem. The lines of interest are in the constructor and in the initialize() method. In the constructor, the member mAdapter is set to a value that selects the graphics adapter to be used; it is set to D3DADAPTER_DEFAULT. This value usually resolves to the adapter that owns the desktop marked as primary. In the initialize() method, this value is passed to Direct3D9::CreateDevice. In my case this was the built-in AMD Radeon R9 M295X. The shaders were running on this card and the output was being copied over to the GTX 1080 for display. Once I knew this, getting the problem resolved was easy. I set the GTX 1080 as the primary display adapter, logged out of my computer, and logged back in. After this, performance was great!

It was still possible to get bad performance though. If I moved Chrome back to the built-in display it appears that there were performance penalties from the memory being copied from the GTX 1080 back to the built in adapter. On other machines the penalties might not be as severe.

What does my setup look like? I use the Sonnet eGPU cases (I have got two) and usually have an NVidia GPU in one and an AMD GPU in the other. The Apple computer that I am using does not have the USB-Thunderbolt 3 adapter interface that is used by these cases. I must use a Thunderbolt 3 (USB-C) to Thunderbolt 2 adapter to make this work.

Sonnet eGPU case

HDMI Capture on the Raspberry Pi

Back in January I tweeted about an HDMI capture device for the Raspberry Pi. I’ve only recently gotten a chance to use it. The device, known as the “HDMI to CSI-2 module”, works with the Raspberry Pi. Overall my experience was positive, though I found that this device has limitations that, if not known up front, can result in some frustration. The device connects to the CSI-2 camera interface and presents itself as a camera. The utilities and scripts that you may have used with the Raspberry Pi cameras also work with this device without modification. Along with the HDMI capture module, the package contains the cable needed for connecting it to the full-size Raspberry Pi and a second cable for use with a Raspberry Pi Zero.

One of the first uses that came to mind with this device is that I could use camera options beyond the official Pi cameras. The cameras that I have around the house produce clean HDMI signals. They already have a range of lenses available, from macro lenses for close-up pictures of small items to a 2,132-millimeter Schmidt–Cassegrain for astrophotography.

My smallest lens next to my largest lens. Both of which are not available for use on the Pi through my digital camera.

The first time I tried to use the capture device with one of my cameras, it didn’t work. I received a non-descriptive error that is primarily associated with non-working or improperly installed cameras.

mmal: mmal_vc_component_enabled: failed to enable component: ENOSPC
mmal: camera component couldn't be enabled
mmal: main: Failed to create camera component
mmal: Failed to run camera app. Please check for firmware updates

Thankfully, this isn’t indicative of an actual hardware failure. The capture device works with a limited set of resolutions and refresh rates. For 1080p video signals, the maximum refresh rate is 25 fps.

Resolution    Refresh Rate (fps)
720p          50
720p          60
1080i         50
1080p         24
1080p         25
Supported Resolutions

After making adjustments to the output settings of my camera, I was successful in using it with the HDMI capture.

The camera was the first device that came to mind, but it could work with non-camera HDMI sources too. I connected a Nintendo Switch to the device and it captured from the switch just fine. Provided that the signal is within the resolution and FPS range and is not an encrypted (HDCP) signal, it works.

Comparing the HDMI capture device to the Raspberry Pi cameras, there were a few differences to note. While it may be easy to assume that the digital photo camera paired with this device is better than the Raspberry Pi cameras, that isn’t necessarily the case. “Better” is a matter of what satisfies the requirements for a solution. If that solution requires high physical portability, the photo camera’s size could be a disadvantage. Using an external camera also adds to the power needs; the external camera will need to have its own battery or power supply. The official Raspberry Pi cameras run off of the Raspberry Pi’s power.

HDMI to CSI-2 Module next to Raspberry Pi Camera

The Pi cameras offer some higher resolutions than one can capture with the HDMI capture device. Resolution is an attribute of quality, but not the only metric for quality. I hesitate to label the higher resolution as higher quality because there are cases where a lower resolution camera may be rated better on other quality metrics, such as clarity or dynamic range, or may have attributes that make it a better fit for a specific application, such as a different shutter angle.

The Raspberry Pi HQ camera (recognizable by its C-mount for attaching a lens) can capture still photographs of up to 4056×3040 pixels. The Raspberry Pi Camera v2 captures stills at up to 3280×2464 pixels. For video, all of the cameras have the same resolutions. Keep in mind, though, that at these higher resolutions the device is capturing stills and not video frames, so the rate of capture will be much lower.

Resolution    Frame Rate (fps)
1080p         30
720p          60
480p          60/90
Raspberry Pi Camera Framerates

How did it work? After trying it on a Raspberry Pi with a Nintendo Switch, I would rate the capture device as okay. It isn’t stellar, but it isn’t bad either. It provides a way to interface with HDMI sources. During recording, it appeared that some frames were dropped, and playback confirmed this. I wondered whether the dropped frames were due to the speed of the memory card in the Pi or to some computational limit on its ability to encode the video to H.264. The next thought that came to mind was to try it with the Jetson Nano. Sadly, while the Jetson Nano also uses a CSI-2 interface, at the time of this writing the capture module is not compatible with the Jetson Nano.

Google IO Conference Registration Open

For reasons I’m sure are widely known, Google will be holding its annual I/O conference this year virtually. The conference will be held from 18-20 May, 2021. Registration is free and open to all at https://g.co/io. The schedule of sessions is expected to be posted before the month of April is over.

Nvidia GPU Technology Conference, 12-16 April 2021

Registration for Nvidia’s GPU Technology Conference (GTC) is now open at no cost. From April 12 to April 16, Nvidia will be offering online presentations with an emphasis on AI applications. The presentations cover industries including healthcare, networking, game development, robotics, and more. Over 1,600 sessions are listed in the session catalog. Much like last year’s conference, this one runs around the clock, so don’t be surprised if you see a session scheduled for 3:00 AM or 10:00 PM. If you don’t manage to catch a presentation live, you can watch it later once the recording is posted.

Register Now



Making your Android App an Android Instant App

Android Instant Apps offer a way for users to try out your application without fully installing it. An instant app can be launched from a link; a link on a website could launch your instant app without the user needing to check whether they have the application installed first. This allows someone to get to the intended experience in only a few moments. I’m very much a proponent of Instant Apps, since they potentially make it less necessary to review which apps haven’t been used in a while as candidates for removal to manage the storage on a device; if a device becomes low on resources, it will remove the cached instant apps as needed. If an application is instant app enabled, the Play Store will present both a “Try Now” and an “Install” button.

If an application is made of several modules, only the modules needed for the instant app to run are downloaded. This is enabled through AABs (Android App Bundles). Later this year, in August 2021, Android apps published through Google Play must be packaged as an AAB instead of an APK. A key difference between the AAB and the APK is that the AAB contains the binaries and files for all variants of your application (ARM, ARM64, x86) and the layouts. Google Play will then use dynamic delivery to ensure that the components that a specific device needs are delivered to that device.

Since only the components that are needed are downloaded, the user does not have to wait on the entire application package to download for the application to open. This process is faster than downloading and installing regular applications; it is perceivably instant in some cases. Instant applications must be limited to 15 MB in size.

To use the Instant App feature, your application must support Android 5.0 at minimum, though after November 2021 developers will be required to target Android 11. No, this doesn’t mean that support is dropped for people with older phones: an Android application’s build.gradle has both a targetSdkVersion attribute and a minSdkVersion attribute, and the minimum version can be lower than the target version. Android 8.0 (API level 26) and higher provides some advantages when a user moves from using the instant app to installing the application. If the user decides to install the application, this is considered an upgrade, and the data that the application has stored on the user’s device will migrate to the full application. For API 25 and before, the data transfer is not automatic; the Storage API will need to be used to transfer the data manually.
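A small related check: on API 26 and higher, an application can ask at runtime whether it is currently running as an instant app, which is handy when deciding whether to offer the full install. This sketch uses only the framework API; the play-services-instantapps library also offers compatibility helpers that I am not showing here.

import android.content.Context
import android.os.Build

//Returns true when the app is currently running as a Google Play Instant app.
//PackageManager.isInstantApp() is only available on API 26 and later.
fun isRunningAsInstantApp(context: Context): Boolean {
    return Build.VERSION.SDK_INT >= Build.VERSION_CODES.O &&
            context.packageManager.isInstantApp
}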

Much of the documentation available today suggests that, when creating your instant app, you ensure a certain checkbox is checked at the time the application is created. Looking in Android Studio today, this frequently mentioned checkbox does not exist. If you encounter this, you may be looking at documentation based on older versions of Android Studio.

In Android Studio you will want to ensure the Instant App SDK is installed. In the SDK Manager you will find it under the “SDK Tools” tab. The item is titled “Google Play Instant Development SDK.”

Create an Android application. To enable the instant app feature, a few modifications are needed. You can make these modifications manually or through a menu option. To make the change through the menu option right-click on your app’s module, select “Refactor” and then “Enable Instant Apps Support…”

Selecting this menu option makes changes to your application’s Manifest and the App level build.gradle. In AndroidManifest.xml, a new namespace is added to the root element. An item specifying a sandbox version is also added to the element.

xmlns:dist="http://schemas.android.com/apk/distribution"
android:targetSandboxVersion="2"

An additional element is added to the manifest named <dist:module /> with an attribute dist:instant set to true. You can add an optional dist:title attribute with a string that may be presented to the user to identify your application.

<dist:module
dist:instant="true"
dist:title="@string/instant_launch_title"
>

In the module’s build.gradle, a dependency is added for Google Play’s instant app services.

implementation "com.google.android.gms:play-services-instantapps:17.0.0"

While this enables an application for instant launch, there are other considerations to make for the best experience. This includes potentially dividing your application into modules so that the most essential features available in the instant app live in a smaller module for quick launch, while the other features of your application are in another module. Presently, instant apps are limited to 15 megabytes. One strategy may be having activities for viewing data in one module (so that users can view the data that your application’s services offer) along with some lightweight editors, and placing a more capable editor and other application features in a different module.

There are several ways to test your instant app. One way is through the Google Play development console. You have the option of the instant app and the full install being the same or separate applications. If they are separate, they don’t even need to be in the same project, but they do need to use the same package name. If you decide for them to be different projects, then their version numbers must be different; the instant app needs to have a lower version number than the full application. The transition from the instant app to the full app, should the user decide to perform an install, is treated as an upgrade.

Within the console, upload your full application as you normally would within the chosen testing track. After it is uploaded, select your application from the console and select “Advanced settings.” Under the tabs, select “Release Types” and then select the button to add a new release type. “Google Play Instant” is the type that you want to add.

In the development console select the option to make a new release. You will now have a drop-down where you can select the release type. Select “Google Play Instant.”

You will be prompted to select or upload an application package. If your instant application is the same as your full application, here you can select the previously uploaded AAB. Otherwise, upload the instant version of the application. After filling in the information for the release, you are done, but possibly not ready to test.

When I uploaded my first instant app, the process was a bit frustrating because I did not know that the instant app isn’t necessarily available in the Google Play Store instantly. For me, the full application showed up, but the instant app was nowhere to be found. It can take a day (and sometimes longer) for the option to try the application to show up. Have a bit of patience here. The instant version of your application will (ironically) become available with time.



Invoking Win32 Functions from the Command Line

I’m testing some code that accepts some work items and performs a long running task on the work items. For this project, I generally want the screen to be locked since I’m going to be away from the computer for some time while it runs. I recently decided to make locking the screen part of the script that invokes the tasks. Yes, I could just press [Windows Key]+[L], but if there’s something that I’m doing repeatedly, I would prefer to just automate it and not worry about it.

Locking the screen from a program is easy. Making a call to the parameter-less function LockWorkStation* results in what I want. I thought I would just make a simple C++ program that does nothing more than make this call and be done with it. But something about that didn’t feel right; why make an entire program to invoke that single function? It actually isn’t necessary to make a program to do this. Windows has a utility in the System32 folder: RunDll32.exe. The utility is specifically made for calling functions in DLLs that were written to process Windows messages. If you have ever done Win32 programming with C++ then you are already familiar with these. Calling functions with no parameters works fine also.

In the general case, Rundll32.exe accepts the name of the DLL to invoke and the name of the function within the DLL. For my need, the call looks like the following.

Rundll32.exe user32.dll,LockWorkStation

Win32 functions generally have the following call signature.

void CALLBACK funcName(
       HWND      hwnd,
       HINSTANCE hinst,
       LPSTR     lpszCmdLine,
       int       nCmdShow
);

RunDLL32 will take care of converting arguments from strings on the command line to the data types being passed. Be careful about using this utility; if you pass bad values, expect bad results. For the following, I passed the name of an HTML file from the command prompt and a print dialog opened for printing it out.

rundll32 mshtml.dll,PrintHTML "A title for my document", "C:\temp\map.html"

While I find this to be a useful utility, I don’t recommend it for anyone that isn’t already familiar with calling Win32 functions.

* – the command tsdiscon has a similar effect. It disconnects the current session from the graphical desktop. But when I use this command, logging back in takes much longer, so I prefer not to use it.

“listen EACCES: permission denied 127.0.0.1:443”

After some holiday time off I returned to a work project that uses Angular, started it up, and got this error.

An unhandled exception occurred: listen EACCES: permission denied 127.0.0.1:443

I’ve seen this error before, but did not immediately realize what caused it. It took me a few minutes to recall the source of the problem. This error occurs when there is another application that is already using the port that Angular is trying to open. In my case, it was a VMWare service that was occupying the port. I stopped the service and my project started up. If it happens to you, how would you know which process is using the port?

On Windows, you can list which processes are using which port with the following command.

netstat -aon

You’ll get a full list of ports, addresses, and process IDs.

Active Connections
 Proto  Local Address          Foreign Address        State           PID
   TCP    0.0.0.0:80             0.0.0.0:0              LISTENING       4
   TCP    0.0.0.0:135            0.0.0.0:0              LISTENING       1348
   TCP    0.0.0.0:443            0.0.0.0:0              LISTENING       38880
   TCP    0.0.0.0:445            0.0.0.0:0              LISTENING       4
   TCP    0.0.0.0:902            0.0.0.0:0              LISTENING       39848
   TCP    0.0.0.0:912            0.0.0.0:0              LISTENING       39848
   TCP    0.0.0.0:2179           0.0.0.0:0              LISTENING       2304
   TCP    0.0.0.0:2869           0.0.0.0:0              LISTENING       4
   TCP    0.0.0.0:3389           0.0.0.0:0              LISTENING       1600
   TCP    0.0.0.0:5040           0.0.0.0:0              LISTENING       884
   TCP    0.0.0.0:5800           0.0.0.0:0              LISTENING       5252
   TCP    0.0.0.0:5900           0.0.0.0:0              LISTENING       5252
   TCP    0.0.0.0:7680           0.0.0.0:0              LISTENING       37976
   TCP    0.0.0.0:27036          0.0.0.0:0              LISTENING       17928
   TCP    0.0.0.0:44367          0.0.0.0:0              LISTENING       4
   TCP    0.0.0.0:49664          0.0.0.0:0              LISTENING       704

If you want to filter those results, you can pipe the output through findstr, using the port number as the string to filter by.

C:\Users\Joel>netstat -aon | findstr 443
   TCP    0.0.0.0:443            0.0.0.0:0              LISTENING       38880
   TCP    0.0.0.0:44367          0.0.0.0:0              LISTENING       4
   TCP    192.168.1.81:49166     72.21.81.200:443       TIME_WAIT       0
   TCP    192.168.1.81:49169     64.233.177.101:443     TIME_WAIT       0
   TCP    192.168.1.81:49206     13.249.111.97:443      ESTABLISHED     24324
   TCP    192.168.1.81:49209     52.167.253.237:443     ESTABLISHED     1996
   TCP    192.168.1.81:49220     52.184.216.246:443     ESTABLISHED     37976
   TCP    192.168.1.81:49222     168.62.57.154:443      ESTABLISHED     24324
   TCP    192.168.1.81:49224     52.114.74.45:443       ESTABLISHED     13304
   TCP    192.168.1.81:49227     52.113.194.132:443     ESTABLISHED     10376
   TCP    192.168.1.81:49228     184.24.37.85:443       ESTABLISHED     27828
   TCP    192.168.1.81:49231     13.92.225.245:443      ESTABLISHED     27828
   TCP    192.168.1.81:49233     140.82.113.3:443       ESTABLISHED     24324
   TCP    192.168.1.81:49234     20.190.133.75:443      ESTABLISHED     39168
   TCP    192.168.1.81:49236     204.79.197.203:443     ESTABLISHED     27828
   TCP    192.168.1.81:49238     52.96.104.18:443       ESTABLISHED     12440
   TCP    192.168.1.81:49239     52.96.104.18:443       ESTABLISHED     12440

You will be more interested in matches from the Local Address column (on the left), since that shows the port number being used on your machine. Right now, I can see that on my machine the process occupying port 443 is process 38880. Great, I have a process ID. But what can I do with it? There is another command named “tasklist” that lists process names and their process IDs. Combined with findstr, I can get the name of the process using the specific port.

C:\Users\Joel>tasklist | findstr 38880
 vmware-hostd.exe             38880 Services                   0     32,084 K
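
As an aside, tasklist can also filter by process ID on its own, without findstr; the following should show the same process (using the PID from my example above).

tasklist /FI "PID eq 38880"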

Video Streaming with Node and Express

I’ve got a range of media that I’m moving from its original storage to hard drives. Among this media are some DVDs that I’ve collected over time. It took a while, but I managed to convert the collection of movies and TV shows to video files on my hard drive. Now that they are converted, I wanted to build a solution for browsing and playing them. I tried using a drive with DLNA built in, but the DLNA clients I have appear to have been built with a smaller collection of videos in mind. They present an alphabetical list of the video files. Not the way I want to navigate.

I decided to instead make my own solution. To start though, I wanted to make a solution that would stream a single video file. Unlike most HTML resources, which are relatively small, video files can be several gigabytes. Rather than have the web server send the file in its entirety, I need the web server to send the file in chunks. My starting point is a simple NodeJS project that presents HTML pages through Express.

const express = require('express');
const fileUpload = require('express-fileupload');
const session = require('express-session');
const bodyParser = require('body-parser');
const createError = require('http-errors');
const path = require('path');
const { uuid } = require('uuidv4');
require('dotenv').config();

var sessionSettings = {
   saveUninitialized: true,
   secret: "sdlkvkdfbjv",
   resave: false,
   cookie: {},
   unset: 'destroy',
   genid: function (req) {
      return uuid();
   }
}

const app = express();
app.use(session(sessionSettings));
if (app.get('env') === 'production') {
   app.set('trust proxy', 1);
   sessionSettings.cookie.secure = true;
}
app.use(express.static('public'));
app.use(bodyParser.json());
app.use(fileUpload({
   createPath: true
}));
app.use(function (req, res, next) {
   console.log(req.originalUrl);
   next(createError(404));
});
app.set('views', path.join(__dirname, 'views'));
app.engine('html', require('ejs').renderFile);
app.set('view engine', 'html');

module.exports = app;

With the above application, any files placed in the folder named “public” will be served as static content when requested. That folder holds the stylesheet, JavaScript, HTML, and other static assets. The videos will be in another folder that is not part of the project. The path to this folder is specified by the setting VIDEO_ROOT in the .env file.
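
For illustration, the .env file might contain nothing more than this one setting; the path shown here is just a placeholder for wherever your videos are stored.

# .env - read by dotenv when the server starts
VIDEO_ROOT=D:\media\videos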

For this to stream files, there are two additional routes that I am going to add. One route will return a list of all of the video IDs. The other route will return the video itself.

For this first iteration of video streaming, I’m going to return file names as video IDs. At some point during the development of my solution this may change, but for testing streaming, the file name is sufficient. The route handler for the library will get a list of the files and return it in a structure that is marked with a date. The files it returns are filtered to only include those with an .mp4 extension.

const fs = require('fs');
const express = require('express');
require('dotenv').config();

var router = express.Router();

var fileInformation = { 
    lastUpdated: null, 
    fileList: []
}

function isVideoFile(path) { 
    return path.toLowerCase().endsWith('.mp4')||path.toLowerCase().endsWith('.m4v');
}

function updateFileList() { 
    return new Promise((resolve,reject)=> {
        console.log('getting file list');
        console.log([process.env.VIDEO_ROOT])
        fs.readdir(process.env.VIDEO_ROOT, (err, files) => {
            if(err) reject(err);
            else {
                var videoFiles = files.filter(x=>isVideoFile(x));
                fileInformation.fileList = videoFiles;
                fileInformation.lastUpdated = Date.now();
                resolve(fileInformation);
            } 
        });
    });
}

router.get('/',(req,res,next)=> {
    console.log('library')
    updateFileList()
    .then(fileList => {
        res.json(fileList);
    })
    .catch(err => {
        console.error(err);
        res.status(500).json(err)
    });
});

module.exports = router;
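
For reference, a response from the /library route has roughly the following shape; the timestamp and file names below are made-up values.

{
    "lastUpdated": 1700000000000,
    "fileList": [
        "Ultraviolet.mp4",
        "SomeOtherMovie.m4v"
    ]
}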

The video element in an HTML page will download a video in chunks (if the server supports range headers). The video element sends a request with a header stating the byte range being requested, and the response includes a header stating the byte range that is actually being sent. Our Express application must read the Range header and parse out the range being requested. The Range header contains a starting byte offset and may or may not contain an ending byte offset. Its value may look something like the following.

bytes=0-270
bytes=500-

In the first example, there is a starting and ending byte range. In the second, the request only specifies a starting byte. It is up to the server to decide how many bytes to send. This header is easily parsed with a couple of String.split operations and integer parsing.


function getByteRange(rangeHeader) {
    var byteRangeString = rangeHeader.split('=')[1];
    var byteParts = byteRangeString.split('-');
    var range = [];
    range.push(Number.parseInt(byteParts[0]));
    if(byteParts[1].length == 0 ) {
        range.push(null);
    } else {
        range.push(Number.parseInt(byteParts[1]))
    }
    return range;
}
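
To make the parsing behavior concrete, here is what the function returns for the two example values shown above.

getByteRange("bytes=0-270");   // returns [0, 270]
getByteRange("bytes=500-");    // returns [500, null]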

There is the possibility that the second number in the range is not there, or is present but is outside of the range of bytes for the file. To handle this, there’s a default chunk size defined that will be used when the byte range is not specified. But the range is also checked against the file size and clamped to ensure that there is no attempt to read past the end of the file.

const CHUNK_SIZE = 2 ** 18;
//...
var start = range[0];
if(range[1]==null)
    range[1] = Math.min(fileSize, start+CHUNK_SIZE);
var end = range[1];
end = Math.min(end, fileSize - 1); // the end offset is inclusive, so clamp to the last byte of the file

The response contains headers defining the range of bytes being returned and its length. We build out those headers, set them on the response, and then write the range of bytes. To write out the bytes, a read stream is opened on the video file and piped to the response stream.

const contentLength = end - start + 1;
const headers = { 
    "Content-Range": `bytes ${start}-${end}/${fileSize}`,
    "Accept-Ranges":"bytes",
    "Content-Length": contentLength,
    "Content-Type": getContentType(videoID)
};

console.log('headers', headers);
res.writeHead(206, headers);
const videoStream = fs.createReadStream(videoPath, {start, end});
videoStream.pipe(res);
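
To show how these fragments fit together, here is a minimal sketch of what a complete /video route handler might look like. The getContentType helper, the videoID route parameter, and the use of fs.statSync are my own assumptions for illustration; the project’s actual code may differ.

const fs = require('fs');
const path = require('path');
const express = require('express');
require('dotenv').config();

const router = express.Router();
const CHUNK_SIZE = 2 ** 18;

// Assumption: map the file extension to a MIME type for the Content-Type header.
function getContentType(videoID) {
    return videoID.toLowerCase().endsWith('.m4v') ? 'video/x-m4v' : 'video/mp4';
}

// Range header parser from the earlier snippet.
function getByteRange(rangeHeader) {
    var byteParts = rangeHeader.split('=')[1].split('-');
    return [
        Number.parseInt(byteParts[0]),
        byteParts[1].length == 0 ? null : Number.parseInt(byteParts[1])
    ];
}

router.get('/:videoID', (req, res) => {
    const videoID = req.params.videoID;
    const videoPath = path.join(process.env.VIDEO_ROOT, videoID);
    const fileSize = fs.statSync(videoPath).size;

    const rangeHeader = req.headers.range;
    if (!rangeHeader) {
        // The video element sends a Range header; treat its absence as a bad request.
        res.status(400).send('Range header required');
        return;
    }

    const range = getByteRange(rangeHeader);
    const start = range[0];
    let end = range[1] == null ? start + CHUNK_SIZE : range[1];
    end = Math.min(end, fileSize - 1); // the end offset is inclusive

    const headers = {
        "Content-Range": `bytes ${start}-${end}/${fileSize}`,
        "Accept-Ranges": "bytes",
        "Content-Length": end - start + 1,
        "Content-Type": getContentType(videoID)
    };
    res.writeHead(206, headers);
    fs.createReadStream(videoPath, { start, end }).pipe(res);
});

module.exports = router;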

The server can now serve video files for streaming. For the client side, some HTML and JavaScript is needed. The HTML contains a video element and a <div/> element that will be populated with a list of the videos.

<!DOCTYPE html>
<html>
    <head>
        
        <link rel="stylesheet" href="./style/main.css" />
        <script src="scripts/jquery-3.5.1.min.js"></script>
        <script src="scripts/main.js"></script>
    </head>
    <body>
        <div id="videoBrowser" ></div>
        <video id="videoPlayer" autoplay controls></video>
    </body>
</html>

The JavaScript will request a list of the videos from the /library route. For each video file, it will create a text element containing the name of the video. Clicking on the text will set the src attribute on the video element.

function start() { 
    fetch('/library')
    .then(data=>data.json())
    .then(data=> { 
        console.log(data);
        var elementRoot = $('#videoBrowser');
        data.fileList.forEach(x=>{
            var videoElement = $(`<div>${x}</div>`);
            $(elementRoot).append(videoElement);
            $(videoElement).click(()=>{
                var videoURL = `/video/${x}`;
                console.log(videoURL);
                $('#videoPlayer').attr('src', videoURL );
            })
        });
        
    });
}

$(document).ready(start);

Almost done! The only thing missing is registering these routes in app.js. As it stands now, app.js will only serve static files.

const libraryRouter = require('./routers/libraryRouter');
const videoRouter = require('./routers/videoRouter');
app.use('/library', libraryRouter);
app.use('/video', videoRouter);
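
One thing to watch for when registering these routers: the app.js listing above ends with a catch-all middleware that raises a 404, so the routers have to be added before that middleware or every request will be answered with a 404. The ordering should look something like this.

app.use(express.static('public'));
app.use('/library', libraryRouter);
app.use('/video', videoRouter);
// The catch-all 404 handler must come after the routers.
app.use(function (req, res, next) {
   next(createError(404));
});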

I started the application (npm start) and at first I thought that it was not working. The problem was in the encoding of the first MP4 file that I tried. There is a range of different encoding options that can be used for MP4 files, and looking at the encoding properties of two MP4 files (one of which streamed successfully while the other did not), there was no obvious difference at first.

The problem was with metadata stored in the file. A discussion of video encodings is a topic that could fill several posts of its own, but the short explanation is that we need to ensure that the metadata is at the beginning of the file. We can use ffmpeg to write a new file with this layout. Unlike re-encoding, this process leaves the video data untouched; I used the tool on a movie and it completed within a few seconds.

./ffmpeg  -i Ultraviolet-1.mp4  -c copy -movflags faststart Ultraviolet.mp4

With that change applied, the videos now stream fine.

If you would like to try this code out, it is available in GitHub at the following URL.

https://github.com/j2inet/VideoStreamNode

Creating Development Certificates for Samsung Tizen TVs

Whether you are developing for a consumer Samsung TV or for one of the commercial SSSP displays, you’ll need a development certificate for your code to run. There is a difference in how the certificate is created for the commercial and consumer displays, but the process is largely the same for both.

To get started you’ll need to already have Tizen Studio installed. Open the Tizen Studio package manager and make sure that you have the following components installed.

  • Samsung Certificate Extensions

If you don’t already have the component installed, select it for installation. You’ll also need to have the SDK component installed for the version of Tizen that you are targeting (ex: “5.0 TV”). Once the component is present, start the Tizen Studio Device Manager.

The device manager will be used to get the device’s ID (DUID) for consumer TVs and for installing the development certificate onto the display. For these steps to work, the TV must have development mode enabled and must be set to accept development requests from the same IP address as your development machine; it will refuse requests from other addresses. If you haven’t already enabled development mode, I have another post on how to do that here.

In the device manager there is an icon in the upper right corner showing a phone connected to a computer. Select this icon; it is for establishing connections to the device manager. In the window that opens you will see a list of devices that you’ve previously connected to. If the IP address of your display is there, you can click on the on/off switch icon to reconnect to it. If the IP address of your display is not present, click on the + icon to add it. When adding, you can give the TV a descriptive name, enter the IP address, and the port on which to connect (usually 26101). Click on OK to return to the main Device Manager user interface and you should see your display connected.

Right-click on the display and select DUID to see the ID of the display. Go ahead and copy it to the clipboard; you will need it later on. If you have multiple displays for which you will develop, repeat the same steps to collect the DUID values for the other displays and save them to a text document. Note that if you have both consumer and commercial displays, their DUIDs cannot be mixed with each other. You can perform the following steps for all of your consumer displays at once and then for all of your commercial displays at once.

Open the Certificate Manager.  When it is opened for the first time you may be asked to select a location from which you want to import certificate profiles. Select Cancel here.  You will need to create both an Author certificate and a Distributor certificate. Click on the + icon in the upper right corner to start the process of creating a new certificate. What you select on the window that appears is dependent on the type of display for which you are developing.

TizenCertificateTypeSelection

Commercial (SSSP) Display Steps

For the commercial displays, select “Tizen.” In the next step you’ll be asked to enter a name for the certificate profile. If you develop for other device types (such as mobile devices, watches, or the consumer displays), you’ll need more than one certificate profile, so it is good for them to have easily identifiable names. Enter a name here that lets you know this is a certificate for developing for a commercial display and select Next.

TizenEnteringCertificateProfileName

Next you must select an author certificate. If you’ve created an author certificate before, you have the option to select it. If not, select the option to create a new one. I’ll assume that an author certificate has not been created yet. The minimal amount of information you need for an author certificate is a name and a password for the certificate (don’t forget this password!). You can optionally enter your country code, state, city, organization, department, e-mail address, and a filename in which the key file for the certificate will be saved. Enter your options and select “Next.”

TizenEnteringCertificateAuthorData

The last selection to make is whether you want to use the default Tizen distributor certificate. While this is the same certificate type used for submitting mobile applications to the Tizen store, it is fine for our purposes. Select it and click on “Finish.” With this you have a complete certificate profile for commercial displays.

TizenDistributerCertificateType

Consumer Display Steps

For the consumer displays, when asked for the certificate type, select “Samsung.”

TizenCertificateTypeSelection

On the next screen you’ll be asked for the device type. Select “TV.”

TizenDCertificateeviceType

Enter a name for the profile and select next.

TizenCertificateProfileName

Next you’ll select an author certificate. If you already have one that you’d like to use, you can select it here. If you’ve never created one before, select the first option to create a new certificate. You would also select this option if you had a certificate but it has expired; in that case you may want to check the box that says “Use an Existing Certificate.” If you have an application that has been published to the Tizen store and are creating a new certificate, you’ll want to use this option, since an application’s ID is based in part on the certificate with which it was signed.

TizenAuthorCertificateInformation

Enter your author information. Remember what your password is, especially if you plan to publish your application under this certificate. When you click on “Next” you’ll be asked to sign into your Samsung account. After signing in, your author certificate is created.

You’ll be presented with the option of backing up your certificate. While this isn’t required, it is strongly encouraged. You will want to keep the backup secure, as it forms part of the identity of your apps. You are almost done; you still need a distributor certificate.

TizenBackupCertificate

On the next screen you are prompted to either create a new distributor certificate or select an existing one. Choose the option to create a new one.

TizenNewDistributorCert

Now it is time to use the DUID that you copied earlier. If it is already on your clipboard, it will automatically be pasted into one of the DUID entries. You also have the option to change the privilege level, though in practice you probably won’t. The two privilege levels available are “Public” and “Partner.” Partner gives your application access to functionality that isn’t available to everyone, but Partner-level privileges have to be granted to you by Samsung.

TizenEnterDUID

After you click “Next,” you’ll see a confirmation that the certificate has been created, along with the path to the certificate.

TizenCertificateCreationComplete

For Both Consumer and Commercial

Now that your certificates have been created, you need to let the display know about them so that it can recognize applications that were signed with your certificate and allow them to run. To do this, return to the device manager. Right-click on your display in the device manager and select “Permit to install apps.” The display is ready to accept applications now.

Switching Certificate Profiles

If you are developing for more than one type of Tizen device, you’ll probably have to change which certificate profile you are using as you change which platform you are working on. When you need to change profiles, open the certificate manager. You will see a list of the profiles that you’ve set up and a check mark next to the one marked as the active profile. If you want to change which profile is active, select it from the list and click on the check mark in the upper right corner.

With the certificate created and selected you can now move forward with deploying an application to the display. Start off with a hello world program just to see that it works.
