Avoiding Unnecessary File Downloads While Syncing

I had the opportunity to revisit an old project created for a client. The initial release of this project had a program that synced content from a CMS; it was designed to download only content that had changed since the last time it synced. For some reason, it was now downloading all of the files every time instead of only the ones that had changed. Looking into the problem, I found that changes in the CMS resulted in files no longer having ETag headers, which are used to tell whether a file has changed since the last time it was requested. The files still had a header indicating a last-modified date, and it is easy enough to use that header instead. But the client had enough change requests to justify writing a new syncing component; they had a new CMS with different APIs. File syncing isn’t complex, and I could rewrite the component easily in an evening. I decided to write the new version of the component using .NET 6.0.

Before downloading a file, I need to check the attributes of the file on the server without starting the transfer of the file itself. The HTTP verb for obtaining this information is HEAD. The HEAD verb returns the headers for the resource identified by the URI, but it doesn’t return the resource’s data stream itself. As a quick test, I grabbed the URL for an MP3 player I keep seeing in an Amazon advertisement: https://m.media-amazon.com/images/I/61TUVbqPhLL.AC_SL1500.jpg.

I used Postman to request the image at the URL and examined the headers. Postman performs a GET request by default. Changing the request from GET to HEAD results in a response with no body, but with the headers present. This is exactly what we want!

There are a couple of things we need to do with this information. We need to save it somewhere for future use, and when we make future requests, we need to use it to filter what data we transfer. The filtering can be done on the client side, within the logic of the program making the request, or it can be performed on the server side by adding a request header named If-Modified-Since. Providing a date in this header causes the server to either send the resource (if it is more recent than the date in the header) or return header information only (if the server’s version is not more recent than the date specified). The date must be in a specific format, but if you save the original Last-Modified value, you can send it back exactly as it was received.
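A small, language-agnostic sketch of the two filtering options (shown here in JavaScript; the stored date is a hypothetical example value saved from an earlier response):

```javascript
// A hypothetical Last-Modified value, saved verbatim from an earlier
// response. HTTP dates use the RFC 1123 / IMF-fixdate format.
const storedLastModified = 'Wed, 30 Oct 2019 16:28:38 GMT';

// Server-side filtering: echo the saved value back unchanged and let the
// server answer with 304 (Not Modified) or the full resource.
const conditionalHeaders = { 'If-Modified-Since': storedLastModified };

// Client-side filtering: parse and compare the dates ourselves.
function hasChanged(serverLastModified, stored) {
  return new Date(serverLastModified) > new Date(stored);
}

// Date.prototype.toUTCString emits the same IMF-fixdate format,
// so a parsed date round-trips to the identical string.
const parsed = new Date(storedLastModified);
```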

Let’s jump into actual code. I’ve made a data class that stores information about the files that will be downloaded.

using Newtonsoft.Json;

namespace FileSyncExample.ViewModels
{
    public class FileData : ViewModelBase
    {
        private DateTimeOffset? _serverLastModifiedDate;
        [JsonProperty("last-modified")]
        public DateTimeOffset? ServerLastModifiedDate
        {
            get => _serverLastModifiedDate;
            set => SetValueIfChanged(() => ServerLastModifiedDate, () => _serverLastModifiedDate, value);
        }

        private string _fileName;
        [JsonProperty("file-name")]
        public string FileName
        {
            get => _fileName;
            set => SetValueIfChanged(() => FileName, () => _fileName, value);
        }

        private string _clientName;
        [JsonProperty("client-name")]
        public string ClientName
        {
            get => _clientName;
            set => SetValueIfChanged(() => ClientName, () => _clientName, value);
        }

        private bool _didUpdate;
        [JsonIgnore]
        public bool DidUpdate
        {
            get => _didUpdate;
            set => SetValueIfChanged(() => DidUpdate, () => _didUpdate, value);
        }
    }
}

I’m using this class for two purposes in this example program: building the download list and saving metadata. In the real program, this list is built from a query to the CMS. Here, I create a list of these objects with the file identifiers.

        public MainViewModel()
        {
            Files.Add(new FileData() { FileName = "61lLJ85GYXL._AC_SL1000_.jpg" });
            Files.Add(new FileData() { FileName = "61qfFAQ3xKL._AC_SL1500_.jpg" });
            Files.Add(new FileData() { FileName = "71PKvcmV6DL._AC_SX679_.jpg" });
            Files.Add(new FileData() { FileName = "71fOsWX9qlL._AC_UY327_FMwebp_QL65_.jpg" });
        }

All of these images come from Amazon. The full URL to the data stream is built by prepending the base address to the file name. I do this through string interpolation.

var requestUrl = $"https://m.media-amazon.com/images/I/{file.FileName}";

For the download, I am using the HttpClient. It accepts a request and returns the response.

HttpClient client = new HttpClient();
client.DefaultRequestHeaders.Clear();
client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
client.DefaultRequestHeaders.ConnectionClose = true;

For now, let’s code for a single scenario: there are no files already downloaded. We wish to do our priming download and save both the file’s data and the metadata about the file. To keep the file system clean, instead of placing the metadata in a separate file, I’m saving it in an alternate data stream. This only works on NTFS file systems. If you would like to learn more about that, read here. The significant parts of the code to perform the download follow.

var requestUrl = $"https://m.media-amazon.com/images/I/{file.FileName}";
var request = new HttpRequestMessage(HttpMethod.Get, requestUrl);
var response = await client.SendAsync(request);
var lastModified = response.Content.Headers.LastModified;
if(lastModified.HasValue)
{
    file.ServerLastModifiedDate = lastModified;
}
try
{
    response.EnsureSuccessStatusCode();
    using (FileStream outputStream = new FileStream(Path.Combine(Settings.Default.CachePath, file.FileName), FileMode.Create, FileAccess.Write))
    {
        var data = await response.Content.ReadAsByteArrayAsync();
        outputStream.Write(data, 0, data.Length);
    }
    //Putting the metadata in an alternative stream named meta.json
    var fileMetadata = JsonConvert.SerializeObject(file);
    Debug.WriteLine(fileMetadata);
    var metaFilePath = Path.Combine(Settings.Default.CachePath, $"{file.FileName}:meta.json");
    var fileHandle = NativeMethods.CreateFileW(metaFilePath, NativeConstants.GENERIC_WRITE,
                        0,//NativeConstants.FILE_SHARE_WRITE,
                        IntPtr.Zero,
                        NativeConstants.OPEN_ALWAYS,
                        0,
                        IntPtr.Zero);
    if(fileHandle != new IntPtr(-1)) // CreateFileW returns INVALID_HANDLE_VALUE (-1) on failure
    {
        using(StreamWriter sw = new StreamWriter(new FileStream(fileHandle, FileAccess.Write)))
        {
            sw.Write(fileMetadata);
        }
    }

}
catch(Exception exc)
{
    // Don't let one failed file stop the sync, but log the error for diagnostics.
    Debug.WriteLine(exc);
}

After running the program, the images show up in my download folder. When I open PowerShell and check the streams, I see my alternate data stream present.

Printing out the data in one of the alternative data streams, I see the data in the format that I expect.

PS C:\temp\streams> Get-Item .\61lLJ85GYXL._AC_SL1000_.jpg | Get-Content -Stream meta.json

{"last-modified":"2019-10-30T16:28:38+00:00","file-name":"61lLJ85GYXL._AC_SL1000_.jpg","client-name":"j2i.net"}

PS C:\temp\streams>

Next, we want to modify the program to load this metadata if it exists and grab the last-modified value. This is all we need; we are going to use this information to detect whether the file has been modified.

void RefreshMetadata()
{
    DirectoryInfo cacheDataDirectory = new DirectoryInfo(Settings.Default.CachePath);
    if (!cacheDataDirectory.Exists)
        return;
    foreach(var file in Files)
    {
        var fileInfo = new FileInfo(Path.Combine(cacheDataDirectory.FullName, file.FileName));
        if (!fileInfo.Exists)
            continue;
        //Great! The file exists! Let's load the metadata for it!
        var metaFilePath = $"{fileInfo.FullName}:meta.json";
        var fileHandle = NativeMethods.CreateFileW(metaFilePath, NativeConstants.GENERIC_READ,
                            0,//NativeConstants.FILE_SHARE_WRITE,
                            IntPtr.Zero,
                            NativeConstants.OPEN_ALWAYS,
                            0,
                            IntPtr.Zero);
        if (fileHandle == new IntPtr(-1)) // INVALID_HANDLE_VALUE; no metadata stream to read
            continue;
        using (StreamReader sr = new StreamReader(new FileStream(fileHandle, FileAccess.Read)))
        {
            var metaString = sr.ReadToEnd();
            var readFileData = JsonConvert.DeserializeObject<FileData>(metaString);
            file.ServerLastModifiedDate = readFileData.ServerLastModifiedDate;
        }

    }
}

The previous code that we wrote needs a few changes. If the file being downloaded has a last-modified date, we add that to the request in a header field named If-Modified-Since. Thankfully, .NET can convert a DateTimeOffset object to the string format that the request needs.

if (file.ServerLastModifiedDate.HasValue)
{
    request.Headers.Add("If-Modified-Since", file.ServerLastModifiedDate.Value.ToString("R"));
}

When the response comes back, we must examine the status code. If the file has been updated, the status code will be 200 (OK); this is the normal response we get when first accessing a file. If the file has not been updated since the value we passed in If-Modified-Since, the status code will be 304 (Not Modified) and the response will have no content, so we can move on to the next file.

var response = await client.SendAsync(request);
if(response.StatusCode == System.Net.HttpStatusCode.NotModified)
{
    continue;
}

I can’t modify the images on Amazon to test the behaviour of the app when an image is updated. If you want to test that, you will have to modify the sample program to point to a set of images that you can control. The NodeJS-based http-server utility is useful here if you want to serve a set of images from your local computer for this purpose.

As always, the code for this post is available on GitHub. You can find it in the following repository.


Posts may contain products with affiliate links. When you make purchases using these links, we receive a small commission at no extra cost to you. Thank you for your support.

Twitter: @j2inet
Instagram: @j2inet
Facebook: j2inet
YouTube: j2inet
Telegram: j2inet

Kerbal Space Program 2 Available February 23, 2023

I don’t write much about games (though I am taking that into consideration), but I thought this game deserves a mention. Kerbal Space Program is a game series about running your own space program. The games have an especially flexible system for designing and launching vehicles for travel on land, sea, air, and space. Unlike many other games, Kerbal Space Program uses a physics system that follows many concepts of astrodynamics. If you ever wanted to learn about orbital mechanics, KSP is a great testing ground. KSP places the player in a scaled-down solar system to explore.

KSP 2 builds upon KSP with additional customization and UI updates, and adds interstellar travel and better support for building space bases and colonies on other planets. In later releases, the game’s maker plans to add support for multiplayer. The game is being released as a preview, similar to the path the original KSP took, with frequent updates based on feedback from the players.

Humble Bundle Developer Book Offer

Humble Bundle is known for offering games where you can decide on the price that you pay, with the money going to charity. One of the offers available now is a collection of 27 books from Packt Publishing on software development and related topics. For donating 18 USD, the entire collection of 27 books is available. If you donate less than this, a subset of the books is available to you: for 10 USD, a ten-book bundle; for 1 USD, a three-book bundle. (Note that the books in the smaller selections are preselected and cannot be changed.) Many of the available books are in the topic domains of C++ and Java and range from beginner to advanced. There are a few books on Python, Go, and discrete mathematics. At the time of this posting, the “Programming Mega Bundle” is available for another 14 days. The books are available in PDF, ePub, and MOBI formats.



DART Mission Successful

In the interest of ensuring humanity doesn’t follow the pathway of the dinosaurs, NASA recently carried out a mission known as DART. The purpose of the DART mission was to determine whether it is possible to change the trajectory of an asteroid to prevent it from impacting Earth. The asteroid chosen wasn’t endangering Earth and was selected only for testing. The pair of asteroids observed are named Didymos and Dimorphos; Dimorphos completed an orbit around Didymos every 11 hours and 55 minutes. If NASA successfully affected the trajectory of Dimorphos, they expected the orbital period to change by about 10 minutes. The impact actually altered the orbital period by 32 minutes. This makes for the first time that humans have altered the orbit of a celestial body.

The orbit was altered by flying a space vehicle into the asteroid at a speed of 22,530 kilometers per hour. Of course, this destroyed the spacecraft itself. Though the mission was successful, the observations are ongoing. In another four years, the ESA (European Space Agency) has a fly-by planned to collect more information.

In the event of a threatening asteroid, the expectation is that, if it is discovered early enough, an impactor flown into it could alter its trajectory enough that it is no longer a threat to life here on Earth.

References:

CNN
NPR
Fox News



Samsung Developer Conference 2022

On Wednesday, Samsung held its 2022 developer conference. A standout attribute of this conference is that people were invited to attend in person, something I’ve not really seen in developer conferences since 2019 (for obvious reasons🦠). Of course, for those who cannot attend, many aspects of the conference were also streamed from https://samsungdeveloperconference.com and from their YouTube channel (https://www.youtube.com/c/SamsungDevelopers).

Concerning the content, the conference felt a bit heavier on items of consumer interest. The keynote highlighted Knox Matrix (Samsung’s blockchain-based security solution for their devices, not just phones), Samsung TV Plus, gaming, Tizen, and more.

The sessions for the conference were available either as prerecorded presentations or as live sessions. The prerecorded sessions were made available all at once.

Android

In addition to updates to their interface (One UI, coming to the S22 series at the end of the month), Samsung is adding a task bar to the Tab S8 and their foldable phones, enabling switching applications without going to the home screen or task switcher. Samsung also covered support for multitasking: Samsung’s phones support running two or three applications simultaneously, and many of the multitasking features use standard Android APIs. There are multiple levels of support that an application can have for multi-window capable devices. One is simply supporting the window being resized. FLAG_ACTIVITY_LAUNCH_ADJACENT indicates that an application was designed for a multi-window environment. New interactions enabled by multi-window applications include drag-and-drop from one instance to another, multiple instances of an application, and “flex mode” (where either side of a foldable device is used for a different purpose).

Some well-known applications already support features for these environments, including Gmail, Facebook, Microsoft Outlook, and TikTok.

Presentations

Multitasking Experiences
LE Wireless Audio

Tizen

It’s been 10 years since Tizen was released in 2012. In previous years, Samsung has presented Tizen as its operating system for a wide range of devices. The OS could be found running on some cameras, phones, TVs, and wearables. Tizen got its best footing in TVs; you’ll find it on all of the Samsung TVs available now above a certain size, some computer monitors, and a few TVs from other manufacturers. Its presence on other devices has diminished, with Samsung’s wearables now using Wear OS and the Tizen phones out of production. I encountered some of the “Tizen Everywhere” marketing, but it now appears to refer to the wide range of displays that use Tizen.

One of Samsung’s presentations concerning Tizen had its own timeline of Tizen’s evolution. I might make my own, since I’ve been interested since it was in its proto-version (Bada). Samsung announced Tizen 7.0. The features highlighted in the release were in the areas of:

  • OpenXR runtime
  • Real-time Kernel
  • 3D Rendering enhancements
  • Android HAL support
  • Cross-platform improvement
  • Natural User Interface Enhancements

I personally found the natural user interface enhancements to be interesting; they included a lot of AI-driven features. Support for running Tizen applications on Android was also mentioned. I’m curious as to what this means, though. If typical Samsung Android devices can run Tizen applications, then it gives the OS new relevance and increases the strength of the “Tizen Everywhere” label. Tizen has been updated to use a more recent Chromium release for its web engine. Tizen also has support for Flutter; that support was actually released last year, but compatibility and performance are improved with Tizen 7.0.

Samsung has also exposed more of the native SDKs in Tizen 7.0 to C# and C. For .NET developers, Tizen 7.0 has increased MAUI support.

Presentations

What’s new in Tizen
Tizen Everywhere

Samsung TV Plus

This is Samsung’s IPTV service. It is integrated into the TV in such a way that it is indistinguishable from over-the-air channels. The entities most interested in what this service has to offer are likely advertisers. Samsung provided information both on making one’s video content available on Samsung TV Plus and on monetizing it. While I don’t see myself implementing features related to this, I did find the presentation interesting. About five minutes before a show airs, the ad slots become available for advertisers to fill, and the ad inventory is auctioned off.

Presentations

Samsung TV Plus
Home Connectivity Alliance

Gaming

The TVs support being paired with a Bluetooth controller and streaming games through the Samsung Gaming Hub. HTML-based games are served to the phone via what Samsung calls Instant Play. Samsung also showed off the features it has made available for immersive audio within gaming environments.

Presentations

Dolby Atmos with Games
Immersive Experiences on Big Screens

Health

Samsung says they worked with Google to come up with a single set of APIs that developers can use for health apps. Oftentimes, Samsung begins developing for some set of hardware features, and later Samsung and Google normalize the way of interacting with those features. I thought these sessions would be all about Samsung Health (the application that lets you log your health stats on the phones), but the development also included their large-screen (TV) interfaces, with enhancements for telehealth visits. Collection of health-related data has been enhanced on the Galaxy Watch 5. One of the enhancements is a microcontroller dedicated to collecting health data while the CPU sleeps, which allows the watch to collect information with less demand on the battery. The new watch is also able to measure body composition through electrical signals.

Presentations

TeleHealth in Samsung Devices
Expand Health Experiences with Galaxy Watch

IoT

Samsung’s SmartThings network now also includes the ability to find other devices and even communicate data to those devices. Like other finding networks, their solution is based on devices being able to communicate with each other. Devices can send two bytes of data through the network. How these two bytes are used is up to the device. Two bytes isn’t a lot, but it could still be of utility, such as a device sending a desired temperature to a thermostat, or another device simply signaling “I’m home.”
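As an illustration of how much a two-byte payload can carry, here is a hypothetical encoding (not Samsung’s actual format, which isn’t documented in this post) that packs a temperature into two bytes:

```javascript
// Hypothetical encoding: store a temperature in tenths of a degree as a
// signed 16-bit integer (range -3276.8 to 3276.7), little-endian.
function encodeTemperature(tempC) {
  const tenths = Math.round(tempC * 10);
  const buf = Buffer.alloc(2);      // the entire payload: two bytes
  buf.writeInt16LE(tenths, 0);
  return buf;
}

function decodeTemperature(buf) {
  return buf.readInt16LE(0) / 10;
}
```

A thermostat setpoint of 21.5 °C fits comfortably, with a tenth-of-a-degree resolution to spare.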

Presentations

SmartThings FindMy
Home Connectivity Alliance

Other Sessions

There were plenty of other topic areas covered. I’ve only highlighted a few areas. If you would like to see the presentations for yourself visit the YouTube Channel or see the Samsung Developer’s Conference page.



Image Maps Made for Creatives

Many of the people with whom I work are classified as either technical or creative (though there is a spectrum between these classifications). On many projects, the creative workers design UIs, while the technical people transform those designs into something working. I’m a proponent of empowering those who create a design with the ability to implement it. This is especially preferable on projects where a design will go through several iterations.

I was recently working on a project for which there would be a menu with a map of a building. Clicking on a room in the map would take the user to a web page with information on that room. I had expected the rooms on the map to be generally rectangular. When I received the map, I found that many of the rooms had irregular shapes. HTML does provide a solution for defining clickable shapes within an image through image maps. I’ve never been a fan of those, and for this specific project I would not be able to ask the creatives to update the image map, so I decided on a different solution. I can’t show the map that was actually being displayed; as an example, I’ll use a picture of some lenses sitting in the corner of my room.

Collection of Lenses

Let’s say I wanted someone to be able to click on a lens and get information about it. In this picture, the lenses overlap, so defining rectangular regions isn’t sufficient. I opened the picture in a paint program and applied color in a layer over the objects of interest; each color is associated with a different object classification. Image editing isn’t my skill, though. The result looks rough, but it is sufficient. This second image will be used in the HTML page to figure out which object someone has clicked on. I’ll have a mapping of these color codes to objects.

When a user clicks on the real image, the pixel color data is extracted from the associated image map and converted to a hex string. To extract the pixel data, the image map is rendered to a canvas off-screen. The canvas’s context exposes methods for accessing the pixel data. The following code renders the image map to a canvas and sets a variable containing the canvas 2D context.

function prepareMap(width, height) {
    var imageMap = document.getElementById('target-map');
    var canvas = document.createElement('canvas');
    canvas.width = width;
    canvas.height = height;
    var canvasContext = canvas.getContext('2d');
    canvasContext.drawImage(imageMap, 0, 0, imageMap.width, imageMap.height);
    areaMapContext = canvasContext;
}

I need to know the position of the image relative to the browser’s client area. To retrieve that information, I have a method that recurses through the positioning containers for the image and accumulates the positioning settings to a usable set of coordinates.

function FindPosition(oElementArg) {
    if (oElementArg == undefined)
        return [0, 0];
    var oElement = oElementArg;
    if (typeof (oElement.offsetParent) != "undefined") {
        for (var posX = 0, posY = 0; oElement; oElement = (oElement.offsetParent)) {
            posX += oElement.offsetLeft;
            posY += oElement.offsetTop;
        }
        return [posX, posY];
    }
    return [0, 0];
}

The overall flow of what happens during a click is defined within mapClick in the example code. To convert the coordinates on which someone clicked (relative to the body of the document) to coordinates relative to the image, I only need to subtract the offsets returned by the FindPosition function. The retrieved color code for the area the user clicked on is used as an index into the color-code-to-product-identifier mapping, and the product identifier is used as an index into the product-identifier-to-product-data mapping.

function mapClick(e) {
    var PosX = e.pageX;
    var PosY = e.pageY;
    var position = FindPosition(targetImage);
    var readX = PosX - position[0];
    var readY = PosY - position[1];

    if (!areaMapContext) {
        prepareMap(targetImage.width, targetImage.height);
    }
    var pixelData = areaMapContext.getImageData(readX, readY, 1, 1).data;
    var newState = getStateForColor(pixelData[0], pixelData[1], pixelData[2]);
    var selectedProduct = productData[newState];
    showProduct(selectedProduct);
}

One could simplify the mappings by having the color data map directly to product information. I chose to keep the two separated, though. If the color scheme were ever changed (which I think is very possible for a number of reasons), it is better that these two items of data be decoupled from each other.
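The getStateForColor helper and the two mappings aren’t shown above; a minimal sketch of how they might fit together (the colors and products here are hypothetical placeholders, not the project’s real data) looks like this:

```javascript
// Hypothetical mapping tables: color hex key -> product key, then
// product key -> product data. Kept separate so the color scheme can
// change without touching the product records.
const colorToProduct = {
  'ff0000': 'wideAngleLens',
  '00ff00': 'teleLens'
};
const productData = {
  wideAngleLens: { name: 'Wide Angle Lens' },
  teleLens: { name: 'Telephoto Lens' }
};

// Convert the RGB components read from the canvas into a lowercase hex key.
function getStateForColor(r, g, b) {
  const toHex = (v) => v.toString(16).padStart(2, '0');
  return toHex(r) + toHex(g) + toHex(b);
}

// e.g. a click that reads pure red from the image map:
const key = getStateForColor(255, 0, 0);
const product = productData[colorToProduct[key]];
```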

You can find the full source code for this post on GitHub at this url. Because of security restrictions in the browser, you must run this code from a local HTTP server. Attempting to run it from the file system will fail due to limitations on how an application can use data loaded from a local file. I also have brief videos on my social media accounts that walk through the code.



Stadia Cancelled

Recently, Google announced that it will be bringing Stadia to an end. Stadia was Google’s game streaming service; with the purchase of a controller and a Chromecast, users could take advantage of GPUs in the cloud to play their games. To use the service, there was both a monthly fee for access, and games had to be purchased individually. The cancellation comes closely after reports that Sundar Pichai told employees of plans for cost cuts to make Google more profitable [src]. Google also released a blog post [src] stating that the service didn’t get the traction they had hoped for.

A few years ago, we also launched a consumer gaming service, Stadia. And while Stadia’s approach to streaming games for consumers was built on a strong technology foundation, it hasn’t gained the traction with users that we expected so we’ve made the difficult decision to begin winding down our Stadia streaming service.

I’m in possession of a number of Stadia units myself. Speaking only for myself, I didn’t have faith in the product. I carry a skepticism of online-only games, having lost games to smaller incidents before. One of the first Xbox 360 games I purchased online became inaccessible on my other Xboxes when the publisher left the store. I’ve had other games that installed locally become less functional or completely non-functional because a server was taken offline. When I do purchase an app or a game, I do so knowing that it could become unusable and unavailable. I don’t mind paying small amounts for what I see as temporary access to a game or app. But paying $60 for a game that could essentially evaporate out of my account was not comfortable for me. Nor was the ongoing 10 USD/month I’d pay for access to the service.

I don’t know why other people might not have jumped on board, especially with the component shortage making the acquisition of an Xbox or PlayStation challenging. Those units have been out for two years, and we are still not in a place where someone can walk into a retail store with high confidence that there will be a unit on the shelf to pick up and purchase.

What does this all mean for those who purchased games? Financially, the outcome is most favourable: they are getting a refund from Google.

We’re grateful to the dedicated Stadia players that have been with us from the start. We will be refunding all Stadia hardware purchases made through the Google Store, and all game and add-on content purchases made through the Stadia store. Players will continue to have access to their games library and play through January 18, 2023 so they can complete final play sessions. We expect to have the majority of refunds completed by mid-January, 2023. We have more details for players on this process on our Help Center.

The Chromecasts, of course, are still very usable and functional. The controllers themselves are, as far as I can tell, e-waste now. Out of curiosity, I looked up the Stadia in the Google Store. It is still listed on the store with no ability to make a purchase.

I very much wish that Google would release the source code or some other information so that the community could make the controllers useful. But since they are giving full refunds, I don’t think they will be doing much more. The only thing people might have lost is their saved games (for games that do not support cross-platform progress, as Destiny did). According to a post on Reddit, Google did acknowledge the desire for the controllers to remain useful after the shutdown, but no promises were made [src].



Uploading Large Files in Express

I’m back from taking a break. I’ve been working on a number of projects that culminated in a crunch-time scenario: there was a lot to get done before I went on a vacation with no computer and an Internet connection that would range from flaky to nonexistent. When going on trips, I often bring some movies with me to watch when I’m taking a break and retreating to my lodging. For this trip, I did something a bit different than usual. Instead of copying my media directly to my mobile device, I copied it to a Raspberry Pi and set it up as a movie streaming server. (More on the particulars of my scenario at the end of this post.) Early last year, I posted about using Express and NodeJS for streaming. For this trip, I copied that to a Pi that had a lot of storage and did some other things to prepare it for my environment. These included:

  • Setting up the Pi as a Wireless Access Point
  • Setting my Application to Run on Boot
  • Adding the Ability to Upload files

While I didn’t perform these changes during my trip, I realized there was some other functionality I wanted this solution to have, including:

  • Converting videos of other formats to MP4
  • Ensuring the MP4’s metadata was arranged for streaming
  • Adding some service to convert uploaded files as needed

Many of these changes could be topics of their own. Let’s talk about uploading files.

Uploading a File with express-fileupload

My first solution for uploading a file was to make a simple web form allowing a file to be selected, and to add a router to my web application to handle file uploads. I’ll start off by telling you that this didn’t work; skip ahead to the next section if you don’t want to read about this initial failed attempt. The web form itself is simple. A minimalist version of it follows.

<form action='/upload' enctype='multipart/form-data'  method='post'>
	<input type='file' id='videoUploadElement' name='video' />
	<input type='submit' />
</form>

On the server side, I added a router to accept my file uploads. Upon receiving the bits of a file, it writes them to a designated folder, preserving the original file name.

//Do not use. For demonstration purposes only. While this
//appears to work, the entire file is loaded into memory
//before being written to disk. I've found that with large
//files this can easily slow down or halt the Pi. Use the
//Busboy-based router shown later instead.
const express = require('express');
const router = express.Router();

router.post('/', async(req,res) => {
	try {
		if(!req.files) {
			console.log(req.body);
			res.send({
				status: false,
				message: 'No File'
			});
		} else {
			let video = req.files.video;
			// mv() returns a promise when no callback is given; await it
			// so the response isn't sent before the move completes.
			await video.mv('./uploads/' + video.name);
			res.send({
				status: true,
				message: ' File Uploaded',
				data: {
					name: video.name,
					mimetype: video.mimetype,
					size: video.size
				}
			});
		}
	} catch(err) {
		res.status(500).send(err);
	}
});
module.exports = router;

Testing it out on a small file, it works just fine. Testing it out on a large file locks up the Pi. The issue here is that **all** of the file must be loaded into memory before it is written to storage. The files I was uploading were feature-length high-definition videos. I tried uploading a 4.3-gigabyte video and watched the Pi progress from less responsive to completely unresponsive. This wasn’t going to work for my needs.

Uploading and Streaming to Storage with Busboy

Instead of using express-fileupload I used connect-busboy. Using Busboy I’m able to stream the bits of the file to storage as they are being uploaded. The complete file does **not** need to be loaded to memory. When the upload form sends a file, the busboy middleware makes the file’s data and meta-data available through the request object. To save the file, create a file stream to which the data will be written and pipe the upload (busboy) to the file storage stream.

//Uses Busboy. See: https://github.com/mscdex/busboy

const express = require('express');
const router = express.Router();
const path = require('node:path');
const Busboy = require('connect-busboy');
const fs = require("fs");

const UPLOAD_FOLDER = 'uploads';

if(!fs.existsSync(UPLOAD_FOLDER)) {
    fs.mkdirSync(UPLOAD_FOLDER, { recursive: true });
}

router.post('/', async(req,res, next) => {
	try {        
        
        console.log('starting upload');        
        console.log(req.busboy);
        req.busboy.on('field', (name, val, info)=> {
            console.log(name);
        });
        
        req.busboy.on('file', (fieldname, uploadingFile, fileInfo) => {
            console.log(`Saving ${fileInfo.filename}`);
            var targetPath = path.join(UPLOAD_FOLDER, fileInfo.filename);
            const fileStream = fs.createWriteStream(targetPath);
            uploadingFile.pipe(fileStream);
            fileStream.on('close', ()=> {
                console.log(`Completed upload ${fileInfo.filename}`);
                res.redirect('back');
            });
        });
        req.pipe(req.busboy);
    } catch (err) {
        res.status(500).send(err);
    }
});

module.exports = router;

A fairly complete version of my app.js follows, including the reference to the router that has the streaming upload method.

const express = require('express');
const createError = require('http-errors');
const path = require('path');
const { v4: uuidv4 } = require('uuid');
const Busboy = require('connect-busboy');
require('dotenv').config();

const app = express();
app.use(Busboy({
   immediate: true,
   limits: {
      fileSize: 10 * 1073741824 // 10 gigabyte per-file limit
   },
   highWaterMark: 4 * 1048576, // Set 4 megabyte buffer
}));

app.use(express.static('public'));


const libraryRouter = require('./routers/libraryRouter');
const videoRouter = require('./routers/videoRouter');
const uploadRouter = require('./routers/uploadRouter');
const streamRouter = require('./routers/streamRouter');

app.use('/library', libraryRouter);
app.use('/video', videoRouter);
app.use('/upload', uploadRouter);
app.use('/streamUpload', streamRouter);


app.use(function (req, res, next) {
   console.log(req.originalUrl);
   next(createError(404));
});
app.set('views', path.join(__dirname, 'views'));
app.engine('html', require('ejs').renderFile);
app.set('view engine', 'html');

With this in place, I have a working upload feature. I don’t want to stop there though. FFMPEG runs on the Pi and I may take advantage of this to do additional transformations on media to prepare it for streaming. I’m going to add code for handling this processing next.

Getting the Code

If you want to try the code out, it is on GitHub. You can find it here: https://github.com/j2inet/VideoStreamNode.

About my Scenario

I only brought the Pi for movies, but I ended up using it for a lot more. I had an iPhone and an Android phone with me on the trip. I managed to fill up the memory on my iPhone and needed a place to move some video. The Android phone had plenty of space, but there was no direct method to transfer data between the two devices. At one point I also wanted to edit video using Adobe Rush, only to find that Adobe doesn’t allow Rush to be installed on Samsung’s latest generation of phones at this time. My iPhone had Rush, thus I had to get information transferred from my Android Phone to my iPhone to do this.

Had these use cases come to mind beforehand, I probably would have acquired a USB adapter for my iPhone (the Android phone will connect to my USB-C drives with no special adapters needed). I’m sure there are apps that could bridge this gap, but my iPhone has no coverage when I’m out of the country, so I didn’t have a chance to try such apps out. Some part of me feels it is a bit silly that data can’t move between these two systems without some intermediary, such as another computer or a dongle and drives. But given that we are still not at a point where media in text messages transfers reliably without quality loss, it isn’t surprising.

Given more time, I might make an app specifically for transferring data between devices in the absence of a common network. (This is not something I am promising to do though).



Plant Timelapse

Photography is among my interests. I decided to experiment with time-lapse photography using plants. To create a time-lapse video, you need a camera that can take photos at timed intervals and software to assemble those photographs into a video. There are many solutions for doing this. Over time I will try out several different software programs, cameras, setups, and subjects. For this first attempt, I used a GoPro. I have a GoPro 5, an older model; currently, the most recent version available is the GoPro 10. All of these models have time-lapse photography settings built into the device. You choose your camera settings, select the time interval between photographs, aim the camera at your subject, and let it run. With these cameras you can also specify that you are shooting a video, and the camera will assemble the photos into a video for you.

When doing a time-lapse shot, you want to leave the setup undisturbed. But you will also want to know how things look so that you can make corrections. To this end, I let the GoPro run for a few hours and stopped it to look at the results. When I did this, I found that my original setting of taking a photo once a second was too frequent; it would fill up the memory card I was using too fast. I also found that I didn’t like my original angle. I made adjustments, let another test run for an hour, and was content. I set things up and let them run. The results were okay overall, but there was still plenty of room for improvement. The first item of improvement was the lighting. While I liked the look aesthetically, the light wasn’t sufficient for the plant. In my time lapse you will see the plants grow up long and skinny. This is something plants do while underground with little light exposure; they grow this way until they get sufficient light and then transition from growing up to growing out. Because of the insufficient lighting, these plants used a lot of their resources trying to grow up to get more light.

Towards the end of my time lapse, I pulled out one of my DSLRs. (I feel that DSLRs are ancient given that the major camera manufacturers have transitioned to mirrorless. But it still works, and I keep using it.) I have an intervalometer for my camera. This is a timing device that can be used to trigger the camera. I set it up for 10-second intervals, just like the GoPro, and let it run during the last day of the 10 days it took to get my time-lapse shots. The results were much better. Comparing the two, the DSLR will be my go-to device for time-lapse shots. That’s not to say the GoPro is out. The GoPro is much more tolerant of various conditions, especially outdoor conditions. I’ll be using it for some outdoor time-lapse shots fairly soon, though the results will be far off in the future.

One of the issues here is that the lighting conditions that give the photo the look that I want might not be the conditions under which the plant can thrive. I started imagining solutions, and one that may work is a light that turns off or changes brightness in sequence with the photos. Full lighting conditions would be applied most of the time, but from just before to just after the shot is taken, the dimmer lighting conditions could be used. I’ve got a DMX controller and thought about using it, but that could be overkill. I thought about using a relay controlling a power source. But after a lot more thinking, I realized I already have a solution: my Hue lighting. The Philips Hue lights are controllable via REST calls. I could have a Pi dedicated to controlling the lights of interest.

The light switching must be coordinated with the camera. My intervalometer would not work for this. While I could probably get a working time sequence up front, over the course of days the intervalometer and the light sequencing could drift out of sync with each other. I need to have the Pi control the camera too. I’ve written before on controlling Hue lighting from the Pi. I think that could be used here. Now as soon as I get free time from work and other obligations, I’ll be looking into controlling the digital camera from a Pi. Some of the libraries that I’ve looked at appear to be capable of controlling both the traditional DSLRs and the more modern mirrorless cameras.

I’ve gotten some seeds for corn, okra, and peppers planted now. Once they sprout, I’ll start my next time lapse with a more advanced setup.


re_Terminal::Industrial CM4 Case with a Screen

For me, the options for adding a screen to a Raspberry Pi have always come with a bit of dissatisfaction. This isn’t because of any intrinsic flaw in the designs; the Pi has its own thickness, which has led to solutions with form factors that are not quite my preference. This started to change with the release of the Raspberry Pi Compute Modules. With the Raspberry Pi Compute Module 4, I see some satisfying solutions. One has a plan available for a 3D-printable case. Another comes already encased. I chose a solution that already has a case because I don’t have a 3D printer and I’ve had mixed results using third-party printers. The solution that I selected is the Seeed Studio reTerminal.

Video covering the Seeed Studio reTerminal

Before speaking on it more, I want to point out that this case does not have a battery. If you are seeking a solution with a battery, then you may want to consider the solution with the 3D print designs and alter it to hold a battery.

The unit is sold with a Raspberry Pi Compute Module 4 (CM4) included. Right now, the unit ships with the CM4 variant that has Wi-Fi, 4 gigabytes of RAM, and 32 gigabytes of eMMC. This is great, as it is nearly impossible to get a CM4 by itself these days. The packaging uses more flexible wording, saying “Up to 8GB RAM/Up to 32GB EMMC,” suggesting that at some point they may sell the unit with other variants. The only indication of which CM4 module is in the box is a sticker with a barcode that spells out the CM4 version (CM4104032).

The display on the unit itself is 720×1280 pixels. It may sound like I stated those dimensions in reverse. I haven’t. Going by the direction in which the pixels refresh, the first scan line is at the left of the screen, and the refresh works its way to the right. This differs from conventional displays, which start at the top and work their way down. Accessible through the case are gigabit Ethernet, 2 USB 2.0 ports, the Pi 40-pin header, and an industrial high-speed expansion interface; this unit was designed with industrial applications in mind, though I won’t be paying attention to that industrial interface. The case also has a real-time clock, a cryptographic coprocessor, a few hardware buttons (including a button to power the unit on), an accelerometer, and a light sensor. Out of the box, the software needed for this additional hardware is preinstalled. Should you choose to reinstall the operating system yourself, you will need to install the software and drivers for the additional hardware manually.

Component Layout Diagram from Seeed Studio

The packaging for the unit contains extra screws, a screwdriver, and the reTerminal unit itself. On the lower side of the unit is a connector for a 1/4-inch screw. This is the same sized screw used by many camera tripods. I’m using one of the mini desktop tripods for my unit. To power the unit on all that is needed is to connect power to the USB-C connector on the left side of the reTerminal.

The unit does not ship with an on-screen keyboard installed. For initial setup, you will want, at minimum, a USB-C power supply and a keyboard. If you do not have a mouse, the touch screen works just fine.

reTerminal Specific Hardware

I’ve mentioned a number of hardware items contained within the reTerminal, such as the custom buttons. Accessing the additional hardware and interfaces is easier than I expected. The four buttons on the front have been mapped to the keyboard keys A, S, D, and F. If you would like to map these to different keys, they can be set through /boot/config.txt. Within that file is a line that looks similar to the following.

dtoverlay=reTerminal,key0=0x041,key1=0x042,key2=0x043,key3=0x044

The hex numbers are ASCII codes for the characters these keys will generate. You can change these as needed.

LEDs and Buzzer

There are four positions for LEDs below the screen of the unit. Two of those positions have LEDs that are controllable through software. The positions are labeled STA and USR. USR has LED 0 (green). Position STA has LEDs 1 (red) and 2 (green). Because of the two LEDs behind position STA, the perceived color of that position can range from green to yellow to red. Control of the LEDs is available through the file system. In the directory /sys/class/leds are the subdirectories usr_led0, usr_led1, and usr_led2. Writing a text string with a number in the range of 0 to 255 to a file named brightness will set the brightness of the LED, with 0 being off and 255 being full brightness. Note that root access is needed for this to work.

According to the documentation, this number changes the brightness of the LEDs. But in practice, each LED appears to be binary; I don’t see any difference in brightness between a value of 1 and a value of 255.

The buzzer is treated like the LEDs, but only has a “brightness” range of 0 (off) to 1 (on). The device directory for the buzzer is /sys/class/leds/usr_buzzer. As with the LEDs, write to a file named brightness.

Real Time Clock

The real-time clock is connected to the I2C interface of the CM4. The command line utility hwclock works with the clock.

Light Sensor

The light sensor is exposed through the path /sys/bus/iio/devices/iio:device0. Reading the in_illuminance_input file in that directory returns the brightness value.

Accelerometer

The accelerometer in the unit is an ST Microelectronics LIS3DHTR. This hardware can be used to automatically change the screen orientation, or for other applications. To see it in action, you can use the evtest tool that Seeed Studio preinstalled on the device. Running evtest and selecting the index of the accelerometer hardware will result in it displaying the readings for each axis.

My Setup

As per my usual, after I had the Pi up and running there were a few other changes that I wanted to apply.

Testing the Hardware

For testing much of the above-mentioned hardware, root access is needed. I would prefer to avoid using root access. I first tried to grant permission on the needed files to the user pi. Ultimately, this doesn’t work as planned. The file paths are sysfs paths. This is part of a virtual file system used for accessing hardware. It gets recreated on each reboot. Changes made do not persist. But if you wanted to grant permissions that are available until the next reboot, you could use the following. Otherwise, you’ll need to run your applications that use this additional hardware as root.

#enter interactive root session
sudo -i
#navigate to the folder for LEDs and the buzzer
cd /sys/class/leds
#grant permission to the pi user for the brightness folder
chown pi usr_led0/brightness
chown pi usr_led1/brightness
chown pi usr_led2/brightness
chown pi usr_buzzer/brightness

#grant permission to the light sensor
chown pi /sys/bus/iio/devices/iio:device0

#exit the root session
exit

Some of the hardware uses the SPI and I2C interfaces. Using the Raspberry Pi Config tool, make sure that these interfaces are enabled.

Install the tool for input event viewing. The tool is named evtest.

sudo apt-get install evtest -y

Once installed, run evtest. Note that this tool still works even if you are entering commands over SSH. The tool will list the input devices and prompt you to select one.

 $ evtest
No device specified, trying to scan all of /dev/input/event*
Not running as root, no devices may be available.
Available devices:
/dev/input/event0:      Logitech K400
/dev/input/event1:      Logitech K400 Plus
/dev/input/event2:      Logitech M570
/dev/input/event3:      gpio_keys
/dev/input/event4:      ST LIS3LV02DL Accelerometer
/dev/input/event5:      seeed-tp
/dev/input/event6:      Logitech K750
/dev/input/event7:      vc4
/dev/input/event8:      vc4
Select the device event number [0-8]: 3

The actual order and presence of your options may vary. In my case, you can see the devices associated with a Logitech Unifying receiver that is connected to the device. The hardware buttons on the device are represented by gpio_keys. For me, this is option 3. After selecting it, as I press or release any of these buttons, the events print in the output. Remember that by default these buttons are mapped to the keys A, S, D, and F. This is reflected in the output.

Input driver version is 1.0.1
Input device ID: bus 0x19 vendor 0x1 product 0x1 version 0x100
Input device name: "gpio_keys"
Supported events:
  Event type 0 (EV_SYN)
  Event type 1 (EV_KEY)
    Event code 30 (KEY_A)
    Event code 31 (KEY_S)
    Event code 32 (KEY_D)
    Event code 33 (KEY_F)
    Event code 142 (KEY_SLEEP)
Properties:
Testing ... (interrupt to exit)
Event: time 1651703749.722810, type 1 (EV_KEY), code 30 (KEY_A), value 1
Event: time 1651703749.722810, -------------- SYN_REPORT ------------
Event: time 1651703750.122811, type 1 (EV_KEY), code 30 (KEY_A), value 0
Event: time 1651703750.122811, -------------- SYN_REPORT ------------
Event: time 1651703750.832809, type 1 (EV_KEY), code 31 (KEY_S), value 1
Event: time 1651703750.832809, -------------- SYN_REPORT ------------
Event: time 1651703751.402797, type 1 (EV_KEY), code 31 (KEY_S), value 0
Event: time 1651703751.402797, -------------- SYN_REPORT ------------
Event: time 1651703751.962817, type 1 (EV_KEY), code 32 (KEY_D), value 1
Event: time 1651703751.962817, -------------- SYN_REPORT ------------
Event: time 1651703752.402812, type 1 (EV_KEY), code 32 (KEY_D), value 0
Event: time 1651703752.402812, -------------- SYN_REPORT ------------
Event: time 1651703753.132807, type 1 (EV_KEY), code 33 (KEY_F), value 1
Event: time 1651703753.132807, -------------- SYN_REPORT ------------
Event: time 1651703753.552818, type 1 (EV_KEY), code 33 (KEY_F), value 0
Event: time 1651703753.552818, -------------- SYN_REPORT ------------

Since we are speaking of evtest, exit it with CTRL-C and run it again. This time select the accelerometer. A stream of accelerometer values will fly by. These are hard to track visually in the console, but if you reorient the device and manage to follow one of the readings, you will see it change accordingly.

Event: time 1651705644.013288, -------------- SYN_REPORT ------------
Event: time 1651705644.073140, type 3 (EV_ABS), code 1 (ABS_Y), value -18
Event: time 1651705644.073140, type 3 (EV_ABS), code 2 (ABS_Z), value -432
Event: time 1651705644.073140, -------------- SYN_REPORT ------------
Event: time 1651705644.133259, type 3 (EV_ABS), code 1 (ABS_Y), value 18
Event: time 1651705644.133259, type 3 (EV_ABS), code 2 (ABS_Z), value -423
Event: time 1651705644.133259, -------------- SYN_REPORT ------------
Event: time 1651705644.193161, type 3 (EV_ABS), code 0 (ABS_X), value 1062
Event: time 1651705644.193161, type 3 (EV_ABS), code 1 (ABS_Y), value 0
Event: time 1651705644.193161, type 3 (EV_ABS), code 2 (ABS_Z), value -409
Event: time 1651705644.193161, -------------- SYN_REPORT ------------
Event: time 1651705644.253290, type 3 (EV_ABS), code 0 (ABS_X), value 1098
Event: time 1651705644.253290, type 3 (EV_ABS), code 2 (ABS_Z), value -405
Event: time 1651705644.253290, -------------- SYN_REPORT ------------

Light Sensor

Getting a value from the light sensor is as simple as reading a file. From the terminal, you can read the contents of a file to get the luminance value.

cat /sys/bus/iio/devices/iio:device0/in_illuminance_input

HDMI and Screen Orientation

I earlier gave the screen’s resolution as 720×1280 rather than the more familiar 1280×720. That ordering was deliberate. Typically, screens refresh from top to bottom; this screen refreshes from left to right. You can check this yourself by grabbing a window and moving it around rapidly: some screen tearing will occur, exposing the way the screen renders. On a fresh install of Raspbian or Ubuntu, this is more apparent because the screen will be oriented such that the left edge is the top of the screen and the right edge is the bottom. If you would like to manage the orientation of the reTerminal’s display and any external displays it is connected to, there is a screen-orientation utility you can install.

sudo apt-get install arandr -y

Future Expansion

I don’t give much weight to plans for future products in general, since there is no guarantee that they will materialize. But I’ll reference what Seeed Studio has published. The “Industrial High-Speed Interface” connects to a number of interfaces on the CM4, including those for PCIe, USB 3.0, and SDIO 3.0. Seeed Studio says it plans to make modules available for connecting to this interface, such as a camera module, a speaker and microphone array, PoE, 5G/4G modems, and so on.



Remote Desktop on the Pi

While I enjoy being productive on my Pi over SSH, there are times when I need to access the desktop environment. Rather than be bound to the display the Pi is connected to (if it is connected to one at all; some of my Pis have no display), I decided to set up Remote Desktop on the Pi. Most of the computers that I use are Windows machines and already have a remote desktop client. (Note: another option is VNC.) I did this for my Jetsons as well. While the same instructions often work on both the Jetson and the Pi, this is not one of those cases. I have another entry coming on how to perform the necessary steps on the Jetson.

On the Pi, there are only a few steps needed. Like many installations, start with updating your packages. Open a terminal and enter the following commands.

sudo apt-get update
sudo apt-get upgrade

This could take a while to run, depending on how long it has been since you last ran it and how many packages there are to update. After it completes, use the following commands to install a remote desktop server on your Pi.

sudo apt-get install xrdp -y
sudo service xrdp restart

Once the installation is done, you need to get your Pi’s IP address.

ifconfig

You should see the addresses for your Pi’s network adapters listed. There will be several. My Pi is connected via ethernet. I need the address from the adapter eth0.

Response from ifconfig.

Once you have that information, you are ready to connect. Open the remote desktop client on your computer and enter your Pi’s IP address as the identifier for the target machine. Once you connect, you will be greeted with a second login screen that asks for information about the session you wish to start.

PI RDP Login

Leave the Session setting at its default of Xorg. Enter the user ID and password for your Pi. A few moments later you will see the Pi’s desktop. Note that while many remote desktop clients default to using the resolution of your local computer’s display, you also have the option of setting a resolution manually. You may want to do this if you are on a slower network connection, or if you just do not want your remote session to cover your entire local desktop.

Remote Desktop Client Resolution Settings



Sun Gazing Equipment

Today was a nice day. The weather was sunny, but not hot and the sky was fairly clear. I already had my telescope in my car for plans that were not starting until after sunset. But I decided to do a bit of sun gazing while the sun was up. “Sun gazing” is a term that might raise a bit of concern since looking at the sun directly can be damaging to one’s vision. Don’t worry, I wasn’t doing that. I was using proper equipment. I grabbed some video clips from my gazing and shared them on my YouTube and Instagram accounts. This post gives further information about that video.

Acquired for the 2017 eclipse, I have a solar filter that covers my telescope’s opening. These filters block more than 99.9% of sunlight. A hole even as small as a pin head would render the filter unusable by letting too much light in. Without the filter, simply pointing the telescope at the sun could be damaging; there could be heat buildup inside the telescope, and whatever is on the viewing end of the telescope will suffer serious burns with exposure of only a moment.

I have a couple of telescopes at my disposal, but the telescope on the motorized mount is generally preferred for a couple of reasons. One is that it automatically points at the planet, star, or nebula that I select from a menu on a hand controller (after some calibration). Another is that it automatically adjusts in response to the earth’s rotation. This last item might not sound significant, but it is! With my manual telescope, once I’ve found a heavenly body, the body is constantly drifting out of view. With proper alignment the body can be tracked by turning a single knob, but it can be a bit annoying to look away for a moment only to return and have to hunt down the body of interest again. The downside of the motorized mount is the weight and the need for electricity. My full motorized telescope setup is over 100 pounds. At home this isn’t a problem, as I can carry the fully assembled setup in and out of my home and connect it to my house’s power. For use in other locations, I must either bring power with me or have my car nearby to provide electricity.

CGEM II 800 Edge HD

My telescope is a much older unit. It is a Celestron CGEM 800. This specific model is no longer sold since it has been replaced with newer models. With the CGEM 800, there were additional accessories I purchased to add functionality that comes built into some other models. I added GPS to my telescope, which enables it to get the time, date, and the telescope’s location (all necessary information for the telescope to automatically aim at other bodies). I’ve also added WiFi to my telescope. With WiFi, I can control the scope from an app on a mobile device. For some scenarios, this is preferred to scrolling through menus on the two-line text only display on the scope’s hand controller.

While one won’t be viewing any sunspots with them, I also keep a set of eclipse glasses with my setup. I use these when aligning the telescope with the sun. While they are great for looking at the sun, you won’t be able to see anything else through them🙂. If you want to see more detail, you need a telescope that filters out specific wavelengths of light. The Meade SolarMax series is great for this, but those scopes are also expensive and only useful for viewing the sun.

Meade Solarmax II
Picture taken from Meade SolarMax (source)

These telescopes cost about 1,800 USD.

At this time of the year, from where I live, only a couple of solar-system bodies are visible: the sun and the moon. If I were to use the telescope at 5 AM, I might catch a glimpse of another planet just before the sun begins to wash out the quality of the image. That is not something I’m interested in doing. I’ll take the telescope back out later in the year when there is an opportunity to see more.

On another YouTube channel someone mentioned they thought it would be cool if it were possible to control a telescope with a Raspberry Pi. Well, it’s possible. I might try it out. I’ve controlled my telescope from my own software before, and may try doing it again. Later in the year when the other planets are visible, it might be a great solution for controlling the telescope and a camera to get some automated photographs.

NVIDIA Edge Computing Introduction May 12

NVIDIA is holding a session on an introduction to Edge Computing. The introduction is said to cover fundamentals, how to integrate edge computing to your infrastructure, which applications are best deployed to the edge, and time for Q&A. The conference is at no cost. If you’d like to register for the conference, use this link.




Booting a Pi CM4 on NVME

I go through a lot more SD cards than a typical person. I’m usually putting these cards in single-board computers like the Jetson Nano or the Raspberry Pi and using them there. I have a lot of these devices. The cards only occasionally fail, but with a lot of devices, “occasionally” is frequent enough for my rate of card consumption to be higher than that of a typical consumer. The easy solution is to just not use SD cards. At this point, the Pi can boot off of USB drives. I’ve generally resisted this for reasons of æsthetics; I just don’t like the U-shaped USB connector (feel free to tell me how silly that is in the comments section).

Enter the Raspberry Pi CM4. These modules have a PCIe interface, and you can select a carrier board that has the hardware you need. One of those boards is the WaveShare CM4-IO-BASE. Among other hardware, this board has an M.2-keyed PCIe slot. There are two versions of this board, version A and version B. The main difference is that the model B has a real-time clock while the model A does not. Otherwise, these boards can be treated as identical.

The CM4-IO-BASE-B that I am using, sandwiched between acrylic cutouts.

The CM4-IO-BASE has screw holes in positions identical to those of a Raspberry Pi 4B. This makes it compatible with a number of bases to which you might want to attach the board. It does differ from the Pi 4B in that it uses a full-sized HDMI port placed where two of the USB ports are on the Pi 4B. At first glance, it appears to give you fewer USB and HDMI options than the Pi 4B, but two USB connections and an HDMI connection are available from the underside of the board. You would need to purchase the HDMI+USB adapter to use those, or interface with them directly.

The top of the board has two camera connectors and a connector for an external display. The feature of interest to me was the M.2 PCIe interface on the underside of the board. I decided on an M.2 2242 drive with 256 GB of capacity. I’ve seen drives of this form factor with up to 2 TB of capacity (for significantly more money).

Getting the Pi to boot from the NVMe drive isn’t hard. The Compute Module that I have has eMMC memory; that’s basically like having an SD card that you can’t remove. Booting from the NVMe drive involves writing the Pi OS to the NVMe drive and changing the boot order on the Pi. For changing the boot order, I needed another Linux device. I used another Raspberry Pi.

Writing the image to the NVMe drive works the same way you would write the image to any other SD card. I happen to have some external NVMe drive enclosures, so I removed the drive from one of them and placed my Pi’s NVMe drive in it. The Raspberry Pi Imager accepted the drive as a target and wrote the OS to it. The tricky part was modifying the boot order on the CM4.

NVME Drive Enclosure

The default boot order on the CM4 is 0xF461. This is something that didn’t make sense to me the first time I saw it. The boot order is a numeric value best expressed as a hex number. Each digit within that number specifies a boot device. The Pi starts with the device specified in the lowest hex digit, tries it first, and then moves on to the next hex digit.

Digit  Device
0x1    SD Card
0x2    Network
0x3    RPI Boot
0x4    USB Mass Storage
0x5    CM4 USB-C Storage Device
0x6    NVMe Drive
0xE    Stop/Halt
0xF    Reboot
Raspberry Pi BOOT_ORDER

For the boot order 0xF461 the Pi will try to boot to devices in the following order.

  • 0x1 – Boot from the SD Card/eMMC
  • 0x6 – Boot from the NVME drive
  • 0x4 – Boot from a USB mass storage device
  • 0xF – Reboot the Pi and try again.

If you have a CM4 with no eMMC memory (the “Lite” version), all you need to do to ensure the right boot order is followed is to make sure you don’t have an SD card connected to the board; you are then ready to boot from the NVMe drive. That’s not my scenario, so I had more work to do. I updated the boot order alongside the Pi’s firmware. The CM4 is usually in one of two modes: it is either running normally, in which case the bootloader is locked, or it is in RPI Boot mode, in which case the bootloader can be written to but the OS isn’t running. The CM4 cannot update its own bootloader; to update it, another computer is needed. I think the best option for updating the bootloader is another Linux machine. In my case, I chose another Raspberry Pi.

The Raspberry Pi can already be picky about the power supplies it works with. I used a USB-C power supply from a Raspberry Pi 400 (the unit built into the keyboard) for the following steps; the usual power supply that I used with my Pi wasn’t sufficient for powering two Pis. You’ll find out why it needed to power two Pis in a moment. I used a Raspberry Pi 4B for writing the firmware to the CM4. To avoid confusion, I’m going to refer to these two devices as the Programmer Device and the CM4.

On the CM4-IO-BASE board there is a switch or a jumper (depending on hardware revision) for switching the Pi to RPI Boot mode. Set a jumper on this pin or turn the switch to “ON”. Connect the CM4 to the Programmer Device with a USB-A to USB-C cable. From the Programmer Device, you will need to clone a GitHub repository that has all of the code that you need. Open a terminal on the Programmer Device, navigate to a folder in which you want the code, and use the following commands to clone and build it.

git clone https://github.com/raspberrypi/usbboot --depth=1
cd usbboot
make

The code is now downloaded and built. Enter the recovery folder and edit the file named boot.conf to change the boot order.

cd recovery
nano boot.conf

At the time of this writing, that file looks like the following.

[all]
BOOT_UART=0
WAKE_ON_GPIO=1
POWER_OFF_ON_HALT=0

# Try SD first (1), followed by, USB PCIe, NVMe PCIe, USB SoC XHCI then network
BOOT_ORDER=0xf25641

# Set to 0 to prevent bootloader updates from USB/Network boot
# For remote units EEPROM hardware write protection should be used.
ENABLE_SELF_UPDATE=1

The line of interest is BOOT_ORDER=0xf25641. The comment in this file already lets you know how to interpret the value. You want the NVMe drive (0x6) to be tried first, so the digit 6 needs to be the last (lowest) hex digit. Change the line to BOOT_ORDER=0xf25416. With this change, the CM4 will try to boot from the NVMe drive first and the eMMC second. If you ever want to switch back to using the eMMC, you only need to remove the NVMe drive.

There is also a file named pieeprom.original.bin. This is the firmware image that will be written to the CM4. To ensure that the CM4 has the latest stable firmware, download the latest version from https://github.com/raspberrypi/rpi-eeprom/tree/master/firmware/stable and overwrite this file. Looking in that folder right now, I see the most recent file is only 15 hours old and named pieeprom-2022-03-10.bin. To download it from the terminal, use the following command.

wget https://github.com/raspberrypi/rpi-eeprom/raw/master/firmware/stable/pieeprom-2022-03-10.bin -O pieeprom.original.bin

After the file is downloaded, run the update script to assemble the new firmware image.

./update-pieeprom.sh

Navigate to the parent folder. Run the rpiboot utility with the recovery option to write the firmware to the device.

sudo ./rpiboot -d recovery

This command should only take a few seconds to run. When it is done, you should see a green light blinking on the Pi, signaling that it has updated its EEPROM. Disconnect the CM4 from the Programmer Device. Remove the jumper or set the RPI Boot switch to off. Connect the Pi to a display and power supply. You should briefly see a message that the Pi is expanding the drive partition. After the device reboots, it will be running from the NVMe drive.

At this point my primary motivation for using the CM4-IO-BASE board has been achieved, but there is some additional hardware to consider. If you have the CM4-IO-BASE model B, there is a real-time clock to set up. For both models, there is fan control available to set up.

Real Time Clock Setup

The real-time clock interfaces with the Pi via I2C. Ensure that I2C is enabled on your Pi by altering the file /boot/config.txt.

sudo nano /boot/config.txt

Find the line of the file that contains dtparam=audio=on and comment it out by placing a # at the beginning of the line. Add the following line to config.txt to ensure I2C is enabled.

dtparam=i2c_vc=on

Reboot the device. With I2C enabled, you can now interact with the RTC through code. Waveshare provides sample code for reading from and writing to the clock. The code in its default state is a good starting point, but is not itself adequate for setting the clock. The code is provided for both the C language and Python. I’ll be using the C version of the code. To download the code, use the following commands.

sudo apt-get install p7zip-full
sudo wget https://www.waveshare.com/w/upload/4/42/PCF85063_code.7z
7z x PCF85063_code.7z -o./
cd PCF85063_code

After downloading the code, enter the directory for the c-language project and build and run it using the following commands.

cd c
sudo make clean
sudo make -j 8
sudo ./main

You’ll see the output from the clock. Note that the clock starts from just before midnight on February 28, 2021 and progresses into March 1; the starting date is hard-coded. Let’s look at the code in main.c to see what it is doing.

#include <stdio.h>		//printf()
#include <stdlib.h>		//exit()
#include <signal.h>     //signal()

#include "DEV_Config.h"
#include <time.h>
#include "waveshare_PCF85063.h"

void  PCF85063_Handler(int signo)
{
    //System Exit
    printf("\r\nHandler:exit\r\n");
    DEV_ModuleExit();

    exit(0);
}

int main(void)
{
	int count = 0;
	// Exception handling:ctrl + c
    signal(SIGINT, PCF85063_Handler);
    DEV_ModuleInit();
    DEV_I2C_Init(PCF85063_ADDRESS);
	PCF85063_init();
	
	PCF85063_SetTime_YMD(21,2,28);
	PCF85063_SetTime_HMS(23,59,58);
	while(1)
	{
		Time_data T;
		T = PCF85063_GetTime();
		printf("%d-%d-%d %d:%d:%d\r\n",T.years,T.months,T.days,T.hours,T.minutes,T.seconds);
		count+=1;
		DEV_Delay_ms(1000);
		if(count>6)
		break;
	}
	
	//System Exit
	DEV_ModuleExit();
	return 0;
}

You can see where the time is set with the functions PCF85063_SetTime_YMD and PCF85063_SetTime_HMS. Let’s update this to use the date and time that the system is using. Place the following lines above those two calls. For now, this only grabs the system time and prints it.

    time_t T = time(NULL);
    struct tm tm = *localtime(&T);

    printf("***System Date is: %02d/%02d/%04d***\n", tm.tm_mday, tm.tm_mon + 1, tm.tm_year + 1900);
    printf("***System Time is: %02d:%02d:%02d***\n", tm.tm_hour, tm.tm_min, tm.tm_sec);

Build and run the program again by typing the following two lines from the terminal.

sudo make -j 8
sudo ./main

This time the program will print the actual current date and time.

USE_DEV_LIB
Current environment: Debian
DEV I2C Device
DEV I2C Device
***System Date is: 20/03/2022***
***System Time is: 19:19:06***
21-2-28 23:59:58
21-2-28 23:59:59
21-3-1 0:0:0
21-3-1 0:0:1
21-3-1 0:0:2
21-3-1 0:0:3
21-3-1 0:0:4

Let’s pass in this information to the calls that set the date and set the time. The information that we need is in the tm structure. Note that in this structure the first month of the year is associated with the value 0. Also note that the tm structure stores the year as the number of years since 1900, while the RTC stores the year as the number of years since 2000. We need to shift the value by 100 to account for this difference. The updated lines of code look like the following.

    printf("***System Date is: %02d/%02d/%04d***\n", tm.tm_mday, tm.tm_mon + 1, tm.tm_year + 1900);
    printf("***System Time is: %02d:%02d:%02d***\n", tm.tm_hour, tm.tm_min, tm.tm_sec);
	PCF85063_SetTime_YMD(tm.tm_year - 100,tm.tm_mon + 1,tm.tm_mday);
	PCF85063_SetTime_HMS(tm.tm_hour,tm.tm_min,tm.tm_sec);

When you run the program again, you’ll see the current time. But how do we know the RTC is really retaining the time? One way is to run the program again with the calls that set the time commented out. One would expect the RTC to continue to show the real time based on the previous call. I tried this, and the RTC was printing out times from 01-01-01. Why did this happen?

I’ve not completely dissected the code, but I did find that the call to PCF85063_init() at the beginning of main resets the clock. I just commented this call out; with it removed, the time is retained. I do still use the call when setting the clock, though. I’ve altered the program to accept a command-line parameter. If setrtc is passed to the program as an argument, it will set the time on the RTC. If setsystem is passed, the program will attempt to set the system time. Setting the system time requires root privileges; if you try to set the time with this program without running as root, the attempt will fail.

The final version of this code is available in my GitHub account. You can find it here.

Fan Control

There’s a difference between version A and version B in how the fan is controlled. On version A, the fan is connected to GPIO 18 and can be turned on and off by changing the state of that pin. On version B, the fan is controlled through the I2C bus, and example code is provided for it. To download the fan code for version B, use the following commands from the terminal.

sudo apt-get install p7zip-full
sudo wget https://www.waveshare.com/w/upload/5/56/EMC2301_code.7z
7z x EMC2301_code.7z -o./
cd EMC2301_code

To build and run the code, use the following commands.

cd c
sudo make clean
sudo make -j 8
sudo ./main

Let’s look at a highly abridged version of the code.


EMC2301_start();
/*********************************/
EMC2301_setSpinUpDrive(60);
EMC2301_setSpinUpTime(300);
EMC2301_setDriveUpdatePeriod(100);
EMC2301_RPMEnable();

EMC2301_writeTachoTarget(8192);
for (int i = 0; i < 10; i++)
{
    EMC2301_fetchFanSpeed();
    DEV_Delay_ms(500);
}

Fan control is straightforward. After some setup calls, the fan speed can be set by passing a tachometer target to EMC2301_writeTachoTarget(). The call to EMC2301_fetchFanSpeed() reads the current fan speed. Through repeated calls to this function, you can watch the fan accelerate when the speed is changed.

Other Hardware

Take note that a number of interfaces are disabled by default on the CM4. This includes the USB-C port, the two camera ports, and the display connector. If you need to use any of these, the resources page for this board has the information that needs to be added to the device configuration to enable them.

Conclusion

Pi setup for this board was pretty easy, and I’d definitely consider getting another one if I can acquire another CM4 (which is difficult during this chip shortage). If I had to do it all over again, though, I would double-check my cables. There was a moment when I thought things were not working because I wasn’t getting a video signal. It turned out that I had two HDMI cables close to each other that I thought were a single cable; I didn’t get a video signal because I had connected to a cable that was not terminating at my display (time to consider cable organization). This is a great board if you need a Pi that is close to the usual form factor but with more storage.

Resources



Running WordPress on a NVIDIA Jetson or Raspberry Pi

As part of an exploration of hosting sites and services on minimal hardware, I wanted to install WordPress on a Raspberry Pi. WordPress is an open-source system for hosting sites and blogs. I’m trying it out because I thought it would be easy to install and set up, and it allows someone to manage posts without demanding familiarity with HTML and other web technologies (though knowing them certainly helps). Since the Raspberry Pi is an ARM-based Linux computer, I also thought these instructions might work on an NVIDIA Jetson with little alteration. When I tried it out, I found that they work on the Jetson with no alteration needed at all. In this post I only show how to install WordPress and its dependencies; I’ll cover making the device visible to the Internet in a different post.

To get started, make sure that your Jetson or Raspberry Pi is up to date. Run the following two commands.

sudo apt-get update
sudo apt-get upgrade

These commands could take a while to run. Once they have finished, reboot your device.

Now to install the software. I’m connected to my device over SSH. You can also run these commands directly in a terminal on the device, but everything that I write is from the perspective of having access only to the terminal. We are going to install the Apache web server, a MySQL-compatible database, and a PHP interpreter.

Apache Web Server

To install the Apache Web Server, type the following command.

sudo apt-get install apache2

After running for a while, Apache should successfully install. You can verify that it is installed by opening a browser to your device’s IP address. From the terminal, you can do this with the following command.

lynx http://localhost

You should see the default Apache page display. To exit this browser press the ‘Q’ key on your keyboard and answer ‘y’ to the prompt.

Installing PHP

To install PHP on your device, use the following command.

sudo apt-get install php

With the PHP interpreter in place, we can add a page with some PHP code to see it processed.

Navigate to the folder that contains the Apache HTML content and add a new page named test-page.php.

cd /var/www/html
sudo nano test-page.php

The file will have a single line as its content. Type the following.

<?php echo "Hey!"; ?>

You can now navigate to the page in a browser.

lynx http://localhost/test-page.php

Installing the Database

MariaDB is a MySQL-compatible database. It will contain the content for our site. Install it with the following command.

sudo apt-get install mariadb-server

The database is installed, but it needs to be configured. To access it, we need to set up a user account and a password. Decide on your user ID and password now, and also choose a name for the database. In the commands that follow, substitute your chosen values for my instances of USER_PLACEHOLDER, PASSWORD_PLACEHOLDER, and DATABASE_PLACEHOLDER.

sudo mysql -uroot

You will be presented with the MariaDB prompt. Type the following commands to create your user account, database, and to give permission to the database.

CREATE USER 'USER_PLACEHOLDER'@'localhost' IDENTIFIED BY 'PASSWORD_PLACEHOLDER';
CREATE DATABASE DATABASE_PLACEHOLDER;
GRANT ALL ON DATABASE_PLACEHOLDER.* to 'USER_PLACEHOLDER'@'localhost';
quit;

We need to make sure that account can access the database. Let’s connect to the database using the account that you just created.

mysql -u USER_PLACEHOLDER -p

You will be prompted to enter the password that you chose earlier. After you are logged in, type the following to list the databases.

SHOW DATABASES;

A list of databases will be shown, which should include a predefined system database and the one you just created.

We also need to install a package so that PHP and MySQL can interact with each other.

sudo apt-get install php-mysql

Installing WordPress

The downloadable version of WordPress can be found at wordpress.org/download. To download it directly from the device to the web folder use the following command.

sudo wget https://wordpress.org/latest.zip -O /var/www/html/wordpress.zip

Enter the folder, unzip the archive, and grant Apache permissions for the folder.

cd /var/www/html
sudo unzip wordpress.zip
sudo chmod 755 wordpress -R
sudo chown www-data wordpress -R

We are about to access our site. It can be reached through the device’s IP address at http://IP_ADDRESS_HERE/wordpress. As a personal preference, I would rather the site suffix be something other than wordpress, so I’m changing it to something more generic: “site”.

sudo mv wordpress site

Now let’s restart Apache.

sudo service apache2 restart

From here on I am going to interact with the device from another computer with a desktop browser. I won’t need to do anything in the device terminal. Using a browser on another computer I navigate to my device’s IP address in the /site folder. The IP address of my device is 192.168.50.216. The complete URL that I use is http://192.168.50.216/site. When I navigate there, I get prompted to select a language.

A screenshot of the language selection screen on the Raspberry Pi. This is the first screen that you will encounter when WordPress is served from the Pi for the first time.
Word Press Language Prompt

The next page lets you know the information that you will need to complete the setup. That information includes

  • The database name
  • The database user name
  • The database password
  • The database host
  • The Table name prefix

The first three items should be familiar. The fourth item, the database host, is the name of the machine that has the database; since we are running the database and WordPress on the same device, this entry will be “localhost”. The table name prefix matters if you ever run more than one WordPress instance against the same database: giving each instance’s tables a distinct prefix keeps them separate. I’m going to use the prefix wp_ for all of the tables. All of this information will be saved to a file named wp-config.php; if you need to change anything later, your settings can be modified in that file.
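
For reference, the entries that the setup screen writes into wp-config.php look something like the following fragment (shown here with the same placeholder names used earlier; substitute your own values):

```php
/** Database settings saved by the WordPress setup screen */
define( 'DB_NAME',     'DATABASE_PLACEHOLDER' );
define( 'DB_USER',     'USER_PLACEHOLDER' );
define( 'DB_PASSWORD', 'PASSWORD_PLACEHOLDER' );
define( 'DB_HOST',     'localhost' );

$table_prefix = 'wp_';
```

Editing these defines by hand later is how you would point the site at a different database or host.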

These are the default settings for WordPress. The first three fields must be populated with the information that you used earlier.
Default WordPress Settings

Enter your database name, user name, and password that you decided earlier. Leave the host name and the table prefix with their defaults and click on “submit.” If you entered everything correctly, on the next screen you will be prompted with a button to run the installation.

WordPress prompt to run the installation. This shows after successfully configuring it to access the database.

On the next page you must choose some final settings for your WordPress configuration.

Final Setup Screen

After clicking on “Install WordPress” on this screen, you’ve completed the setup. With the instructions as I’ve written them, the site will be in the path /site and the administrative interface in the path /site/wp-admin. WordPress is easy to use, but a complete explanation of how it works would be lengthy and won’t be covered here.