USA Testing Emergency Alert System on 4 October 2023 around 2:20 pm

On 4 October 2023 at around 2:20 PM, the USA is testing its emergency alert system. The test will be broadcast over radio and TV and sent to mobile phones. Expect phones around you to be blaring at this time. Don't worry; this is only a test.

If you are likely to be in a situation where you cannot afford or tolerate your phone going off, you might want to keep your phone powered off around this time. Some environments, such as courthouses, have rules requiring phones to be silenced or turned off (I believe a phone going off in court in Atlanta can get someone in trouble for contempt of court). Even if you've muted all of your phone's settings, this alert might not respect them. While some phones expose settings to silence other alerts, the national alert's setting has been unalterable on the phones that I've examined over the years.

When the test goes off, don't be alarmed. If you have one of those emergency alert radios, it might be a good opportunity to see how well it works.

Updating your Profiles in Cisco AnyConnect (macOS)

Some years ago I worked with a client and had to install the Cisco AnyConnect VPN client on my Mac. After the work was done, I uninstalled it. Recently, I found myself needing the VPN for a different client. On reinstalling the software, all of the old settings from the previous engagement were still there, and the software refused to save the new connection URL. To get the client working the way I needed, I had to update the profile manually.

One of the places where the Cisco AnyConnect software saves information is /opt/cisco/anyconnect/profile. Navigating to that path in Terminal, you will find a couple of files. The one of interest is Anyconnect-SAML.xml, an XML file that contains the connection settings. The software also remembers the last server that it attempted to connect to; I don't know where that information is stored, but it isn't needed for this change. The simplest way to address the connection problem is to rename this file. I say "rename" and not "delete" so that the information is still available should you need it. Renaming has the same effect as deleting but allows you to roll back. I changed the file to a name ending in .backup.

With the file effectively deleted, restart the Cisco AnyConnect software; it will still show the last server that you connected to. Enter your new VPN URL and connect. After successfully connecting, the software will remember this URL and make it available the next time that you need to connect.
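The rename can be done from Terminal. Here's a sketch; the path and file name are as described above but may differ between AnyConnect versions, so verify them on your machine first (and note that paths under /opt may require sudo):

```shell
# Back up (effectively delete) the saved AnyConnect SAML profile.
# The default location may differ between AnyConnect versions.
backup_profile() {
    dir="$1"
    if [ -f "$dir/Anyconnect-SAML.xml" ]; then
        mv "$dir/Anyconnect-SAML.xml" "$dir/Anyconnect-SAML.xml.backup"
        echo "Profile backed up; restart the VPN client."
    else
        echo "No profile file found in $dir"
    fi
}

backup_profile "${PROFILE_DIR:-/opt/cisco/anyconnect/profile}"
```

Because the original file is kept with a .backup extension, rolling back is just renaming it back into place.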

Setting a DLL Path at Runtime for P/Invoke

.NET applications can call functions in native DLLs using the [DllImport] attribute. This attribute takes as its argument the name of the DLL in which the target function is stored. But what does one do if the DLL is not in any of the paths that the system searches? First, let's consider where the system looks for DLLs, in the order that it searches them.

  1. The Application Directory
  2. The System Directory
  3. The Windows Directory
  4. The Current Directory
  5. Directories in the PATH environment variable

If the target DLL isn’t in one of those folders, it won’t be found. There is a Win32 function that lets an application add a folder in which the system will look when resolving a DLL location at runtime. The function has the signature BOOL SetDllDirectory(LPCTSTR lpPathName). When this function is called with a valid path, the new search order is as follows.

  1. The Application Directory
  2. The Directory passed in SetDllDirectory()
  3. The System Directory
  4. The Windows Directory
  5. The Current Directory
  6. Directories in the PATH environment variable

The statement for adding a declaration for SetDllDirectory follows.

[DllImport("kernel32.dll", SetLastError = true)]
static extern bool SetDllDirectory(string lpPathName);
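As a sketch of how this might be used (the folder name "NativeLibs" here is just an example, not anything prescribed by the API), the call should happen before the first P/Invoke that would cause the target DLL to load:

```csharp
using System;
using System.IO;
using System.Runtime.InteropServices;

class NativeLoader
{
    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool SetDllDirectory(string lpPathName);

    public static void Main()
    {
        // Hypothetical subfolder holding the native DLLs for this example.
        string nativeFolder = Path.Combine(AppContext.BaseDirectory, "NativeLibs");

        // Add the folder to the DLL search path before any call that
        // would cause the target DLL to be loaded.
        if (!SetDllDirectory(nativeFolder))
        {
            throw new System.ComponentModel.Win32Exception();
        }

        // Calls into [DllImport]-decorated methods from DLLs in that
        // folder can follow from this point on.
    }
}
```

Note that calling SetDllDirectory replaces any directory set by a previous call; it does not accumulate a list.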

Mastodon: @j2inet@masto.ai
Instagram: @j2inet
Facebook: @j2inet
YouTube: @j2inet
Telegram: j2inet
Twitter: @j2inet

Customizing the Logitech/Saitek Flight Instrument Panel

Saitek (which was later acquired by Logitech) created flight instrument hardware that is primarily associated with Microsoft Flight Simulator. While they make various device types, the one in which I had the most interest is the “Flight Instrument Panel.” It is a small LCD display that connects to the computer via USB. It doesn’t appear that Logitech has made any changes to the hardware since its release; the device still uses a mini-USB connector.

I have some purposes for it beyond Microsoft Flight Simulator and wanted to perform some customization on the panel. After going through the setup, the panel began to display information. By default, it displays promotional information for other hardware until an application tells it to display something else. I’m not fond of advertisements on my idle devices and wanted to change these first. Thankfully, this can be done without any programming. The default images come from JPG files that can be found in the file system after the device is set up. Navigate to C:\Program Files\Logitech\DirectOutput to see the files. Replace any one of them to alter what the screen displays.

Before purchasing a panel, I searched for an SDK for it. I didn’t find one, but I found that plenty of other people had software projects for it and figured I would be able to make it work. Only after getting the device set up did I find that the SDK was closer than I realized: documentation for controlling the panel installs alongside the panel’s software. The group of APIs in the SDK is referred to as DirectOutput. No, that’s not one of Microsoft’s DirectX APIs (like Direct3D, DirectInput, and so on); that’s just the name Saitek selected for their SDK.


Erasing an EPROM with Alternative Devices

I’ve come into possession of an EPROM and got a programmer for it. Writing data to it was easy; erasing data is another matter. Note that I said EPROM and not EEPROM. What’s the difference? The first E in EEPROM stands for “Electrically.” An Electrically Erasable Programmable Read-Only Memory can be cleared by an electric circuit. The EPROM I have must be erased with UV light. There is a window on the ceramic package that exposes the silicon underneath. With enough UV light through this window, the chip should be erased.

There are devices sold specifically to erase such memory. I’m not using those. Instead, I have a number of other UV sources to test with. These are:

  • The Sun
  • A Portable UV Phone Cleaner
  • A Clamshell UV Phone Cleaner
  • A Tube Blacklight

I’m using an M27C256 32K EPROM. To know whether my attempts at erasing worked, I first needed to put something on it. I filled the memory with bytes counting from 0 to 255, repeating the sequence when I reached the end, until the entire 32K was filled with this pattern. To produce a file with the pattern, I wrote a few lines of code.

// Fill a 32K (0x8000-byte) buffer with the repeating pattern 0x00–0xFF.
byte[] buffer = new byte[0x8000];
for (int i = 0; i < buffer.Length; i++)
{
    buffer[i] = (byte)i; // the cast keeps only the low 8 bits
}
using (FileStream fs = new FileStream("content.bin", FileMode.Create, FileAccess.Write))
{
    fs.Write(buffer, 0, buffer.Length);
}
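As a quick sanity check (my addition here, not part of the original workflow), a few more lines can read the file back and confirm the pattern; the same comparison is useful against a dump read back from the programmer:

```csharp
using System;
using System.IO;

class PatternCheck
{
    public static void Main()
    {
        byte[] data = File.ReadAllBytes("content.bin");
        bool ok = data.Length == 0x8000;
        for (int i = 0; ok && i < data.Length; i++)
        {
            ok = data[i] == (byte)i; // expect the repeating 0x00–0xFF pattern
        }
        Console.WriteLine(ok ? "pattern intact" : "pattern mismatch");
        // A fully erased EPROM reads back as all 0xFF bytes instead.
    }
}
```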

Now to get the resultant file copied to the EPROM. The easiest way to do that is with a dedicated EPROM programmer. They are relatively cheap, easy to find, and versatile. I found one on Amazon that worked well for me. Using it was only a matter of selecting the type of EPROM I was using, selecting the file containing the content to be written, and clicking the Program button.

The software for writing information to the EPROMs

Reading from the EPROM is just as simple. After the EPROM is connected to the programmer and the EPROM model is selected in the software, a READ button copies all the bytes from the memory device and displays them in a hex editor. I used this functionality to determine whether the EPROM had been erased. Now that I have a way to read and write the EPROM, let’s test the different means of erasure.

Using the Sun

These results were the most disappointing. After leaving an EPROM out for most of the day, the ROM was not erased. Speaking to someone else, I was told that it would take several days of exposure to erase the EPROM. I chose not to leave the EPROM out that long, as I’d risk forgetting it was out there when the weather turned wet.

Using a Portable UV Sanitizer

The portable UV sanitizer that I tried was received as a Christmas gift at the end of 2022. Such devices are widely available now in the wake of COVID. This unit charges over a USB cable and runs off of a battery. When turned on, it stays on until it is turned off, the battery goes dead, or someone turns it over; the unit only emits light when facing downward. I speculate this is a safety feature; you wouldn’t want to look directly into the UV light.

My first attempts to erase one of the EPROMs with this sanitizer were not successful. After several sessions, the EPROMs still had their data on them. While I wouldn’t look directly into the UV light, I could safely point my camera at it. The picture was informative: the light was brighter at the end closer to the power source and very dim at the other end. Before, I was only ensuring the window of the EPROM was under some portion of the lighting tube. Now I knew to place it close to the brighter end of the UV emitter. With the new placement, I was able to erase an EPROM in about 60 minutes.

UV Sanitizer with the EPROM at the brighter end.

Provided that someone is only erasing a single EPROM and isn’t in a hurry, I think this could make for an adequate solution. For more than one, though, this might not work as well, especially when one considers the time needed to recharge the battery after it has been drained by an erasing session.

Clamshell UV Phone Cleaner

I received this clamshell UV phone cleaner as a gift nearly a decade ago. This specific model isn’t sold any more, but newer variations are available under the name PhoneSoap. These have a few advantages over the portable UV sanitizer. It runs from a 12-volt power source, so there’s no waiting for it to recharge before you can use it. It also appears to be a lot brighter. The UV emitter automatically deactivates when the case is opened, but in the brief moment between the case starting to open and the light turning off, some of the light spills out of the unit. It is either a lot brighter, or it emits more light in the visible spectrum.

The unit I use has emitters on both the hinged lid and the lower area of the case, so EPROMs placed in it can be oriented face-up or face-down and still be erased. When the case is closed, the emitter turns on for 300 seconds and then turns off. I’d like it to run longer for my purposes, but 300 seconds isn’t bad. After one 5-minute session in the sanitizer, the EPROM still had data on it, but after a second 5-minute session it showed as erased. I think this unit is worthy of consideration.

Tube UV Light

I have an old UV tube light that I purchased in my teens. I dug it up and found a power supply for it. The light still works, but after leaving an EPROM in direct contact with it for well over 24 hours, I found no change. I expected this outcome for a few reasons, among them that UV lights of this type are commonly used where people can see them, while the sanitizing UV lights carry warnings to keep them away from skin and eyes. From the glimpse I got of them through the phone’s camera, it looks like they operate at a different wavelength (not that a camera is a true measure of wavelength). There’s not much more to be said about the tube light.

The Winner

The clear winner here is the clamshell UV light. It was easy to use and erased the EPROM in ten minutes. The portable UV cleaner comes in second. The other sources didn’t cross the finish line even given a generous amount of time. It might be possible to eventually erase an EPROM with them, but I don’t think it is worth the time.

Now that I have a reliable way to erase these EPROMs, I can use these in the MC6800 Computer that I was working on.



Posts may contain products with affiliate links. When you make purchases using these links, we receive a small commission at no extra cost to you. Thank you for your support.

Jameco Valuepro BB-4T7D 3220-Point Solderless Breadboard

The File System Watcher: Reloading Content Automatically

I was performing enhancements on a video player that read its content at startup, and then would serve from that content on demand. The content, though loaded from the file system, was being put in place by another process. Since the primary application only scanned for content at startup, it would not detect the new content until after the scheduled daily reboot.

The application needed a different behaviour: it needed to detect when content was updated and rescan accordingly. There are several ways this could be done, with the application periodically scanning its content being among the most obvious. There are better solutions, though. The one I am sharing here is the file system watcher. I’ll be looking at the implementations for NodeJS and .NET.

A file system watcher keeps track of files in specific paths and notifies an application when a change of interest occurs. One could watch an entire folder or only specific files. If any watched files change, the application receives a notification.

Let’s consider how this feature is used in NodeJS first. You’ll need to import the file system module. It has a function named watch that accepts a file path. The object that is returned is used to receive notifications when an item within that path is created or updated.

const fs = require('fs');
const path = require('path');

var watcher;
let watchPath = path.join(__dirname, 'config');
console.log(`Watch path: ${watchPath}`);
watcher = fs.watch(watchPath)
watcher.on('change', (event, filename)=> {
	console.log(event);
	console.log(filename);
});

console.log('asset watcher activated');

When a configuration file is changed, how that is handled depends on the logic of your application.

In the .NET environment there’s a class named FileSystemWatcher that accepts a directory name and a file filter. The filter is the pattern for the file names that you want considered; use *.* to monitor all files. You can also filter for notifications of file attribute changes. Instances of FileSystemWatcher expose several events for different types of file system events.

  • Renamed
  • Deleted
  • Changed
  • Created

When an event occurs, the application receives a FileSystemEventArgs object. It provides three properties about the change that has occurred.

  • ChangeType – Type of event that occurred
  • FullPath – The full path to the file system object affected
  • Name – the name of the file system object affected

These should tell you most of the information that you need to understand the nature of the change.

Whether in NodeJS or .NET, the file system watcher provides a simple and efficient way to detect when vital files have been updated. If you want your application to be responsive to changes in files, you’ll want to use it in your solutions.

Find the source code for the sample apps here:

https://github.com/j2inet/FileSystemWatcherDemo

.Net Sample App

The .NET sample app monitors the executable’s directory for the files contents.txt and title.txt. The application has a title area and a content area; if the contents of the files change, the UI updates accordingly. I made this a WPF app because its binding features make it especially easy to present the value of a variable with minimal custom code. I did make use of some custom base classes to keep the app-specific code simple.

using System;
using System.IO;
using System.Linq;

namespace FileSystemWatcherSample.ViewModels
{
    public class MainViewModel : ViewModelBase
    {
        public MainViewModel() {
            var assemblyFile = new FileInfo(this.GetType().Assembly.Modules.FirstOrDefault().FullyQualifiedName);
            var parentDirectory = assemblyFile.Directory;
            Directory.SetCurrentDirectory(parentDirectory.FullName);

            FileSystemWatcher fsw = new FileSystemWatcher(parentDirectory.FullName);
            fsw.Filter = "*.txt";
            fsw.Created += FswCreatedOrChanged;
            fsw.Changed += FswCreatedOrChanged;
            fsw.NotifyFilter = NotifyFilters.CreationTime | NotifyFilters.LastWrite | NotifyFilters.FileName;
            fsw.EnableRaisingEvents = true;
        }


        void FswCreatedOrChanged(object sender, FileSystemEventArgs e)
        {
            var name = e.Name.ToLower();
            switch (name)
            {
                case "contents.txt":
                    try
                    {
                        Content = File.ReadAllText(e.FullPath);
                    } catch (IOException)
                    {
                        Content = "<unreadable>";
                    }
                    break;
                case "title.txt":
                    try
                    {
                        Title = File.ReadAllText(e.FullPath);
                    } catch (IOException)
                    {
                        Title = "<unreadable>";
                    }
                    break;
                default:
                    break;
            }
        }

        private string _title = "<<empty>>";
        public string Title
        {
            get => _title;
            set => SetValueIfChanged(() => Title, () => _title, value);
        }

        string _content = "<<empty>>";
        public string Content
        {
            get => _content;
            set => SetValueIfChanged(()=>Content, ()=>_content, value);
        }
    }
}

Node Sample App

The Node sample app runs from the console. In operation, it is much simpler than the .NET application: when a file is updated, it prints a notification to the screen.

const fs = require('fs')
const readline = require('readline');
const path = require('path');


function promptUser(query) {
    const rl = readline.createInterface({
        input: process.stdin,
        output: process.stdout,
    });

    return new Promise(resolve => rl.question(query, ans => {
        rl.close();
        resolve(ans);
    }))
}



var watcher;
let watchPath = path.join(__dirname, 'config');
console.log(watchPath);
watcher = fs.watch(watchPath)
watcher.on('change', (event, fileName) => {
    console.log(event);
    console.log(fileName);
});
console.log('asset watcher activated');



var result = promptUser("press [Enter] to terminate program.")

Retro: Building a Motorola 6800 Computer Part 1

I was cleaning out a room and came across a box of digital components. Among them were a few ICs for microcontrollers and microprocessors. Seeing these caused me to revisit an interest I had in computer hardware before deciding on the path of a software engineer. I decided to make a simple yet functional computer with one of the processors. I selected the Motorola 6808 from what was available. There was a more capable Motorola 68K among the ICs, but I decided on the 6808 since it would require fewer external components and would be a great starting point for building something. It could also make for a great teaching aid for understanding some computer fundamentals.

MC6800 Series Hello World on YouTube

Hello World

The first thing I want to do with it is simple: I just want to get the processor into a state where it can run without halting. This will be my Hello World equivalent. Often with Hello World programs, the goal is simply to produce something that compiles, runs without failing, and performs some observable action. Hello World programs validate that one’s build system is properly configured to begin producing something. The program itself is trivial.

About the 6800

This processor family is from before my time, initially introduced in 1974. The MC6800 series of processors comes in a few variants, differing in their amount of internal RAM, stand-by capabilities, and clock speed. These are small variations. I’m using the MC6808 but will refer to it as a 6800, since most of what I write here is applicable to all of these processors. This 8-bit processor has only a few registers to track, a 16-bit address bus, and a few control lines. Like any processor, it has a program counter and stack pointer. It also has an index register and a couple of accumulators. The index register, stack pointer, and program counter are all 16-bit, while the two accumulators are 8-bit.

The processor natively performs only integer math operations, but there is a library for floating-point operations. In times past it was distributed as an 8K ROM, but the source code for this library is readily available and could be placed on one’s own ROM. You can find the source code on GitHub.

MC6800 Block Diagram Image Credit: Wikipedia.org

Instruction Set

This processor has an instruction set of only 72 instructions. An instruction with its operands is usually between 1 and 3 bytes. At this size and simplicity, even putting together a simple program without an assembler could be done. Many instructions are variations on the same high-level operation with a different addressing mode. For my goal, I don’t need a deep understanding of the instruction set; I just needed a 1-byte operation that I could execute without any additional hardware or memory. Many processors support an instruction often called nop, standing for “No Operation.” This instruction, as its name suggests, does nothing beyond taking up space. My plan was to hard-wire this instruction into the system. This would let it run without any RAM and without causing any faults or halting conditions.

For this processor, the numerical value of the nop instruction is 0x01. This is an easy encoding to remember. To hard-wire this instruction into the circuit, I only need to tie the least significant bit of the processor’s data bus to a high signal and the other bits to a low signal.
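To illustrate the idea (a conceptual sketch in C#, not anything that runs on the board), a free-running processor with 0x01 hard-wired onto the data bus fetches nop on every cycle while the program counter simply marches through the 16-bit address space:

```csharp
using System;

class FreeRunSketch
{
    const byte HardwiredData = 0x01; // D0 tied high, D1..D7 tied low -> nop

    public static void Main()
    {
        ushort programCounter = 0;
        for (int cycle = 0; cycle < 5; cycle++)
        {
            // Every fetch sees the same hard-wired byte, so the processor
            // does nothing but advance to the next address.
            byte fetched = HardwiredData;
            Console.WriteLine($"address 0x{programCounter:X4} -> opcode 0x{fetched:X2} (nop)");
            programCounter++; // wraps back to 0x0000 after 0xFFFF
        }
    }
}
```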

Detecting Activity

It is easy to think of a processor that is only executing nop instructions as doing nothing at all. That isn’t the case, though. The processor is still incrementing its program counter. As it does, it asserts the new address on its address lines to specify the next instruction it is trying to fetch. Some output status lines will also indicate activity: the R/!W line will indicate read operations, and the VMA line will be high when the processor is asserting a valid memory address on the address bus (the BA, or Bus Available, line goes high only when the processor has halted and released the bus). The processor also responds to some input lines. Three active-low inputs affect execution: RESET, HALT, and IRQ. I’ll need to ensure those are tied high so they stay inactive. Most important of all, the processor needs to receive a clock signal within an acceptable range. The clock signal is necessary for the processor to coordinate its actions; if the clock rate is out of range, the processor might not function correctly. That said, I’m going to intentionally run the processor at a rate lower than what is on the spec sheet, for reasons to be discussed.

As the processor runs, I should be able to monitor what’s going on by watching a few lines, especially the address lines. If I connect light-emitting diodes (LEDs) to the address lines, I should be able to observe whether each line is in a high or low state by seeing which LEDs are on or off. But with the processor running at a clock speed of 1 MHz to 2 MHz, it could go through its entire address space faster than I can perceive. If I run the clock at a reduced speed, the processor might progress slowly enough for me to watch the address lines increment. To achieve this, I’m going to make a clock circuit and put its output through a counter IC. If you are familiar with digital counting circuits, you know that each binary digit changes at half the speed of the digit before it. I can use the outputs of the counter to get a clock running at 1/2, 1/4, 1/8, …, 1/256 of the original rate. That gets the clock into the kilohertz range, which should be slow enough to see the address lines increment.
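The arithmetic behind that choice is simple. A few illustrative lines (assuming the 4 MHz crystal used in the clock circuit) show what each counter tap yields:

```csharp
using System;

class ClockDivider
{
    public static void Main()
    {
        double crystalHz = 4_000_000; // the 4 MHz crystal driving the clock circuit
        for (int tap = 1; tap <= 8; tap++)
        {
            // Each counter output toggles at half the rate of the previous one.
            double dividedHz = crystalHz / Math.Pow(2, tap);
            Console.WriteLine($"divide by {1 << tap,3}: {dividedHz / 1000,8:F3} kHz");
        }
        // The divide-by-256 tap yields 15.625 kHz, slow enough to watch
        // the upper address lines change on LEDs.
    }
}
```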

The Circuit

For the clock circuit, I have a 4 MHz crystal wired into a circuit with some inverters, resistors, and capacitors. I take the output of that and pass it through another inverter before passing it on to the processor (or to the counter between the clock and the processor).

For the processor, most of the work is connecting LEDs with resistors to limit the current. Additionally, I’ve wired the instruction 0x01 to the data bus. With this wired, the only thing the system needs is power.

The Outcome

I’m happy to say that this worked. The processor started running, and I could see the address bus values increasing through the LEDs on the most significant bits.

Next Steps

Now that I have the processor in a working state, I want to replace the hard-wired instruction with an EPROM and add RAM. Once I’m confident that all is well with the EPROM and RAM, I’ll add some interfaces to the outside world. While the parts I think I’ll need are generally out of production (though there are some derivative processors still available new), used versions are available for only a few dollars. Overall, though, this is a temporary diversion. Once it is developed to a certain point, it will be shelved, but that’s not the end of my hardware exploration. There are some things I’d like to do with ARM processors (likely an STM32). Many of the ARM processors I’ve looked at are fairly complete system-on-a-chip components and don’t require a lot of hardware beyond a clean power supply to reach a minimal working state.

Resources

One of the nice things about dabbling in retro computing is that there are plenty of resources available for the hardware. If you find this interesting and want to try some things yourself, here are some resources that may be helpful.



7 Auto Makers Jointly Work to Expand EV Charging

BMW Group, GM, Honda, Hyundai, Kia, Mercedes-Benz, and Stellantis are planning a joint venture to add EV chargers across the USA and Canada. The joint venture is dependent on regulatory approval and closing conditions. Their plan calls for at least 30,000 chargers, starting next year. The new chargers will support both CCS1 and NACS plugs (which, in North America, translates into supporting non-Tesla and Tesla vehicles). The new stations are to support the Plug-and-Charge protocol, meaning the charger and vehicle communicate with each other to start charging automatically, with the driver not having to do any more than connect the charger to their car.

Starting in 2024, the group says it plans to deploy chargers along major highways and in metropolitan areas first. They plan to make use of National Electric Vehicle Infrastructure (NEVI) funding administered by the states to improve charging across major travel corridors. “The stations will be in convenient locations offering canopies wherever possible and amenities such as restrooms, food services, and retail operations either nearby or within the same complex.” This sounds a bit like they’ve re-invented the modern gas station, with chargers instead of gas pumps. But it would be a significant improvement over some charging experiences, where the chargers may be in an isolated area of a parking lot with no rain cover and no buildings or restrooms nearby.

Some Auto Manufacturers Moving to Tesla Chargers

This announcement comes on the heels of several automakers announcing that they plan to transition from CCS1 to Tesla’s NACS. These include Mercedes, Nissan, Rivian, Polestar, and Volvo. Though they made their announcements halfway through 2023, vehicles implementing the connector are not expected until 2024 and 2025.

While I look forward to the expansion of EV charging availability, at the moment this announcement is aspirational. It’s a space I plan to keep an eye on, as I’m personally interested in seeing EV charging capabilities expand.

Statements from the Joint Venture Members

BMW Group CEO Oliver Zipse: “North America is one of the world’s most important car markets – with the potential to be a leader in electromobility. Accessibility to high-speed charging is one of the key enablers to accelerate this transition. Therefore, seven automakers are forming this joint venture with the goal of creating a positive charging experience for EV consumers. The BMW Group is proud to be among the founders.”

GM CEO Mary Barra: “GM’s commitment to an all-electric future is focused not only on delivering EVs our customers love, but investing in charging and working across the industry to make it more accessible. The better experience people have, the faster EV adoption will grow.”

Honda CEO Toshihiro Mibe: “The creation of EV charging services is an opportunity for automakers to produce excellent user experiences by providing complete, convenient and sustainable solutions for our customers. Toward that objective, this joint venture will be a critical step in accelerating EV adoption across the U.S. and Canada and supporting our efforts to achieve carbon neutrality.”

Hyundai CEO Jaehoon Chang: “Hyundai’s investment in this project aligns with our ‘Progress for Humanity’ vision in making sustainable transportation more accessible. Hyundai’s expertise in electrification will help redefine the charging landscape and we look forward to working with our other shareholders as we create this expansive high-powered charging network.”

Kia CEO Ho Sung Song: “Kia’s engagement and investment in this high-powered charging joint venture is set to increase charging access and convenience to current and future drivers and therefore accelerate the transition to EVs across North America. Kia is proud to be an important part of this joint venture with other reputable automakers as we embark on a journey towards seamless charging experiences for our customers and further strengthening Kia’s brand identity in the EV market.”

Mercedes-Benz Group CEO Ola Källenius: “The fight against climate change is the greatest challenge of our time. What we need now is speed – across political, social and corporate boundaries. To accelerate the shift to electric vehicles, we’re in favor of anything that makes life easier for our customers. Charging is an inseparable part of the EV-experience, and this network will be another step to make it as convenient as possible.”

Stellantis CEO Carlos Tavares: “We intend to exceed customer expectations by creating more opportunities for a seamless charging experience given the significant growth expected in the market. We believe that a charging network at scale is vital to protecting freedom of mobility for all, especially as we work to achieve our ambitious carbon neutrality plan. A strong charging network should be available for all – under the same conditions – and be built together with a win-win spirit. I want to thank each colleague involved, as it is a milestone example of our collective intelligence to listen and serve our customers.”


Mastodon: @j2inet@masto.ai
Instagram: @j2inet
Facebook: @j2inet
YouTube: @j2inet
Telegram: j2inet
Twitter: @j2inet

Posts may contain products with affiliate links. When you make purchases using these links, we receive a small commission at no extra cost to you. Thank you for your support.

Xamarin: “The Application cannot be launched because it is not installed”

Working on a Xamarin iOS project from a Windows PC, I ran into a situation where I could no longer debug the application. There had been no changes in source code between when I could debug and when I could not. A search for the error took me to other places where the problem had been discussed but not resolved. While I’ve been able to resolve the problem for myself, those discussions were closed and I couldn’t post a resolution there. In the absence of another place to put this solution, I’m hosting it myself.

The more complete text of the error is as follows.

The application 'MyApplication' cannot be launched or debugged because it's not installed The app has been terminated.

Of course, MyApplication would be the name of your application if you encounter this. While I don’t know what causes it, resolving it is a simple matter of erasing files. For my Xamarin project I’m using Visual Studio Community 2022 on a Windows machine and communicating with an M1 Mac for compilation. On the M1, I had to navigate to the path $HOME/Library/Caches/Xamarin/mtbs/builds/ and erase the files and folders there. Returning to my solution on Windows, I got another error about files not being found, which was resolved by manually selecting the dependency projects and recompiling them. After that, I was able to compile and debug the project as before.
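
For convenience, the cache-clearing step can be done from a Terminal session on the Mac build host. This is a sketch based on the path mentioned above; the mkdir guard just keeps the command safe if the folder doesn’t exist yet.

```shell
# Path used by the Mac build agent for cached Xamarin builds (from the article).
CACHE_DIR="$HOME/Library/Caches/Xamarin/mtbs/builds"

# Create the folder if it is missing so the glob below cannot fail,
# then remove every cached build inside it.
mkdir -p "$CACHE_DIR"
rm -rf "${CACHE_DIR:?}"/*
```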

I’m not sure what causes this error. I would have liked to have looked into it further. But delivery deadlines do not allow further examination. That said, there have been a few other low-frequency errors that I’ve encountered that are resolved by simply clearing this folder.

I hope that this solution is helpful to someone.



Enterprise Apple Certificates and Expiration

I recently explained the expiration behaviour of Apple Distribution certificates to someone, and thought it was worth sharing.

I often work on iOS applications signed with an Enterprise certificate. Applications signed with these certificates can be distributed directly to the device, such as through a Mobile Device Manager or through the browser; they cannot be distributed through the App Store. These applications are signed with a distribution certificate. The distribution certificate can last up to one year, but may expire sooner: it will not last beyond the expiration of the account. If an app were signed by an account that has 7 months until renewal is needed, then the distribution certificate will also expire in 7 months.

Usually, this hasn’t been a problem for me. Many of the applications that I work on are either used for a predefined time period, such as a holiday event, and then shelved, or they are applications receiving updates, in which case they occasionally get new distribution certificates. I had a client that requested an iOS application be signed such that it would not expire. Someone in the client’s development department had re-signed the application and redeployed it when it reached its first expiration date. But the client wanted to be independent of their development department altogether.

Unfortunately, this is not an option for iOS apps. The only way to have a version of the application that is immune to expiration would be to run it in an operating environment that doesn’t demand apps be signed with certificates that expire in a year or less. That is an option with Windows and Android, but not with iOS. The best situation on iOS requires a Mobile Device Manager (MDM). With an MDM, there is the option of making an updated provisioning profile and pushing it out to the devices. Without an MDM, rebuild-and-redeploy is the only option.

This may be something to consider when choosing hardware for a solution within an organization. iOS hardware is consistent in its form, performance, and so on. While Android offers more openness, the variance in hardware is both an advantage and a disadvantage. I appreciate being able to make an app and install it on an Android device very quickly. Of course, the ease of doing this also comes with the potential for bad actors doing the same. The barrier to getting malicious code onto an iOS device is a bit higher.



Restoring Life to a Game Gear

Recently, a friend’s Game Gear ceased to function. As is the case with many old electronics, I suspected that the capacitors in the unit had gone bad. Electrolytic capacitors contain a fluid, and given enough time, that fluid can evaporate. Between soldered-in batteries reaching the end of their lives and capacitors drying out, some old electronics are doomed from the start. Thankfully these components are not necessarily hard to replace. Before taking possession of the Game Gear, I suspected the capacitors and got information on their values; I already have a box of capacitors in the house.

When I received the unit and tried turning it on, there was no response at all. This configuration had a bolted-on battery pack that used a couple of 18650 batteries. The battery pack was dead and refused to charge. Fixing this is pretty easy with the right tools. In addition to a couple of 18650 batteries, I had a couple of metallic strips to connect them, along with insulating shrink wrap to keep the batteries from being electrically exposed.

The Game Gear itself had lots of potential points of failure. There are a lot of capacitors distributed throughout the unit. The device has three circuit boards: one has the power components on it, another has the audio circuitry, and then there is the main board. All of these boards have capacitors on them, but I thought the ones on the power board were the most likely culprits. Rather than testing them, I replaced all three. My repair actions stopped there because the unit was restored to full functionality once those were replaced.

Having opened the Game Gear, though, I found that its construction is fairly straightforward. I decided to start looking at some other old video game systems that I have within the house. When I had some of these as a child, however they worked was magical to me. Looking at them now, I see them as something that I can understand, manipulate, or modify. That led to a quick examination of the circuit schematics and the DRM that each of these units used. Of all of the units I considered, the original Game Boy and some of its derivatives (Game Boy Color, Game Boy Pocket) appear to be among the easiest devices to target. I’m thinking of setting up a development environment for one, writing a “hello world” program, writing it to a cartridge, and seeing it run. I’ll be writing more about that here.



In-App Static Web Server with HttpListener in .Net

I was working on a Xamarin iOS application (using .Net) and one of the requirements was for the application to support a web view for presenting another form. The form would need to be served from within the application. There are lots of ways one could accomplish this; for these requirements it only needed to be a static web server, with the contents delivered via a zip file. Creating a static web server is pretty easy. I’ve created one before, and making this one would be easier.

What made this one so easy is that .Net provides the HttpListener class, which handles most of the socket and network related work for us. It also parses out information from the incoming request, which we can use to generate a well-formatted reply. It contains no logic for what replies should be sent under what circumstances, for retrieving files from the file system, and so on. That’s the part I had to build.

I was given an initial suggestion of getting the Zip file, using the .Net classes to decompress it and write it to the iPad’s file system, and retrieve the files from there. I started with that direction, but ended up with a different solution. Since the amount of data in the static website would be small, I thought it would be fine to leave it in the compressed archive. But if I changed my mind on this I wanted to be able to make adjustments with minimal effort.

Receiving Connections

To receive connections, the HttpListener class needs to know the prefix strings for requests. A prefix will usually contain http://localhost with a port number, such as http://localhost:8081/. It must end with the slash. Multiple prefixes can be specified. If you want the server to listen on all adapters for a specific port, localhost can be replaced with * here. After creating an HttpListener, these prefixes must be added to the listener’s Prefixes collection.

String[] PrefixList
{
    get
    {
        return new string[] { "http://localhost:8081/",  "http://127.0.0.1:8081/", "http://192.168.1.242:8081/" };
    }
}

void ListenRoutine()
{
    _keepListening = true;
    listener = new HttpListener();
            
    foreach (var prefix in PrefixList)
    {
        listener.Prefixes.Add(prefix);
    }
            
    listener.Start();
    //...more code follows
}

The listener is ready to start listening for requests now. A call to HttpListener::GetContext() will block until a request comes in. Since it blocks, everything that I’m doing with the listener is on a secondary thread. I use the listener in a loop to keep replying to requests. The HttpListenerContext object contains an object representing the request (HttpListenerRequest) and the response (HttpListenerResponse). From the request, I am interested in the AbsolutePath; this is the request URL path with any query parameters removed. I’m also interested in the verb that was used on the request. For the server that I made, I’m only handling GET requests.

while (_keepListening)
{
    //This call blocks until a request comes in
    HttpListenerContext context = listener.GetContext();
    HttpListenerRequest request = context.Request;
    HttpListenerResponse response = context.Response;


    ///Handle the request here

}
listener.Stop();

Let’s say that I wanted my server to return a hard coded response. I would need to know the size of that response in bytes. There is an OutputStream on the HttpListenerResponse object that I will write the entirety of my response to. Before I do, I set the ContentLength64 member of the HttpListenerResponse object.

async void HandleResponse(HttpListenerRequest request, HttpListenerResponse response)
{
    String responseString = "<html><body>Hello World</body></html>";
    byte[] responseBytes = System.Text.Encoding.UTF8.GetBytes(responseString);
    response.ContentLength64 = responseBytes.Length;
    var output = response.OutputStream;
    await output.WriteAsync(responseBytes, 0, responseBytes.Length);
    await output.FlushAsync();
    output.Close();
}

When I run the code now and navigate to the URL, I’ll see the text “Hello World” in the browser. But I want to be able to send more than just a hardcoded response. To make the server more useful it needs to send the proper MIME type header for its content, and I need to be able to easily change the content that it serves. To satisfy these goals I’ve externalized the data from the program and defined an interface to aid in adding new ways for the server to respond to a request. I’ll also want to be able to define other classes with different behaviours for requests. For those classes I’ve made the interface IRequestHandler. It defines two methods and two properties that handlers must implement.

  • Prefix – this is a path prefix for the handler. It will only be considered as a class that can handle a response if the request’s absolute path starts with this prefix. If this field is an empty string then it can be considered for any request.
  • DefaultDocument – if no file name is specified in the path, then this is the document name that will be used.
  • CanHandleRequest(string method, string path) – This gives the class basic information on the request. If the class can handle the request it should return true from this method. If it returns false, it will not be given the request to process.
  • HandleRequest(HttpListenerRequest, HttpListenerResponse) – processes the actual request.
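
Pulled together, the interface can be sketched like this (member names and signatures taken from the descriptions above and the handler classes that follow; the exact declaration in the project may differ slightly):

```csharp
public interface IRequestHandler
{
    // Path prefix this handler responds to; an empty string matches any request.
    string Prefix { get; }

    // Document name used when the request path does not name a file.
    string DefaultDocument { get; }

    // Returns true if this handler is willing to process the request.
    bool CanHandleRequest(string method, string path);

    // Produces the response for a request this handler accepted.
    void HandleRequest(HttpListenerRequest request, HttpListenerResponse response);
}
```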

A list of these handlers will be made. Each handler is considered for the request one at a time until one is found that is appropriate; when one is, it processes the request and no further handlers are considered. One of the handlers that I defined is the FileNotFoundHandler. It is the simplest of the request handlers: it can handle anything. Later, I’ll set this up as the last handler to be considered, so that if nothing else handles a request, the FileNotFoundHandler will run.

public class FileNotFoundHandler : IRequestHandler
{
    public string Prefix => "/";

    public string DefaultDocument => "";

    public bool CanHandleRequest(string method, string path)
    {
        return true;
    }

    public async void HandleRequest(HttpListenerRequest request, HttpListenerResponse response)
    {
        String responseString = $"<html><body>Cannot find the file at the location [{request.Url.ToString()}]</body></html>";
        byte[] responseBytes = System.Text.Encoding.UTF8.GetBytes(responseString);
        response.StatusCode = 404;
        response.ContentLength64 = responseBytes.Length;
        var output = response.OutputStream;
        await output.WriteAsync(responseBytes, 0, responseBytes.Length);
        await output.FlushAsync();
        output.Close();
    }
}

Going back to the local server, I’m adding a list of IRequestHandler objects. The list will start with only the FileNotFoundHandler in it. Any other handlers added will be added at the front of the list, pushing everything back by one position. The last item added to the list will receive the highest priority.

List<IRequestHandler> _handlers = new List<IRequestHandler>();

public LocalServer(bool autoStart = false) {
    var fnf = new FileNotFoundHandler();
    AddHandler(fnf);
    if(autoStart)
    {
        Start();
    }
}

public void AddHandler(IRequestHandler handler)
{
    _handlers.Insert(0, handler);
}

void ListenRoutine()
{
    _keepListening = true;
    listener = new HttpListener();
            
    foreach (var prefix in PrefixList)
    {
        listener.Prefixes.Add(prefix);
    }
            
    listener.Start();
    while (_keepListening)
    {
        //This call blocks until a request comes in
        HttpListenerContext context = listener.GetContext();
        HttpListenerRequest request = context.Request;
        HttpListenerResponse response = context.Response;
        bool handled = false;
        foreach(var handler in _handlers)
        {
            if(handler.CanHandleRequest(request.HttpMethod, request.Url.AbsolutePath))
            {
                handler.HandleRequest(request, response);
                handled = true;
                break;
            }
        }
        if (!handled)
        {
            HandleResponse(request, response);
        }
    }
    listener.Stop();

}

This completes the functionality of the server itself, but I still need a handler. I mentioned earlier that I wanted to serve content from a zip file. To do this I made a new handler named ZipRequestHandler. Some of the functionality that it needs will likely be part of almost any handler, so I put that functionality in a base class named RequestHandlerBase. This base class defines a DefaultDocument of index.html. It is also able to provide MIME types based on a file extension. To retrieve MIME types I have a string dictionary that maps an extension to a MIME type. Within the code I define some basic MIME types, but I don’t want all of them to be defined in source code; I have a JSON file that contains a total of about 75 MIME types. If that file were omitted for some reason, the server would still have the foundational MIME types provided here.

static StringDictionary ExtensionToMimeType = new StringDictionary();

static RequestHandlerBase()
{
    ExtensionToMimeType.Clear();
    ExtensionToMimeType.Add("js", "application/javascript");
    ExtensionToMimeType.Add("html", "text/html");
    ExtensionToMimeType.Add("htm", "text/html");
    ExtensionToMimeType.Add("png", "image/png");
    ExtensionToMimeType.Add("svg", "image/svg+xml");
    LoadMimeTypes();
}

static void LoadMimeTypes()
{
    try
    {
        var resourceStreamNameList = typeof(RequestHandlerBase).Assembly.GetManifestResourceNames();
        var nameList = new List<String>(resourceStreamNameList);
        var targetResource = nameList.Find(x => x.EndsWith(".mimetypes.json"));
        if (targetResource != null)
        {
            DataContractJsonSerializer dcs = new DataContractJsonSerializer(typeof(LocalContentHttpServer.Handler.Data.MimeTypeInfo[]));
            using (var resourceStream = typeof(RequestHandlerBase).Assembly.GetManifestResourceStream(targetResource))
            {
                var mtList = dcs.ReadObject(resourceStream) as MimeTypeInfo[];
                foreach (var m in mtList)
                {
                    ExtensionToMimeType[m.Extension.ToLower()] = m.MimeTypeString.ToLower();
                }
            }
        }
    }
    catch
    {
        // If the resource is missing or malformed, fall back to the
        // MIME types registered in the static constructor.
    }
}

Getting a mime type is a simple dictionary entry lookup. We will see this used in the child class ZipRequestHandler.

public static string GetMimeTypeForExtension(string extension)
{
    extension = extension.ToLower();
    if (extension.Contains("."))
    {
        extension = extension.Substring(extension.LastIndexOf("."));
    }
    if (extension.StartsWith('.'))
        extension = extension.Substring(1);
    if (ExtensionToMimeType.ContainsKey(extension))
    {
        return ExtensionToMimeType[extension];
    }
    return null;
}

The ZipRequestHandler accepts either a path to an archive or a ZipArchive object, along with a prefix for the requests. Optionally, the caseSensitive parameter can be set to false to disable the ZipRequestHandler’s default behaviour of treating requests as case sensitive. I’ve defined a decompress parameter too, but haven’t implemented it. When I do, this parameter will decide whether the ZipRequestHandler completely decompresses an archive before using it or keeps the data compressed in the zip file. The two constructors are not substantially different; let’s look at the one that accepts a string for the path to the zip file.

ZipArchive _zipArchive;
readonly bool _decompress;
readonly bool _caseSensitive = true;
Dictionary<string, ZipArchiveEntry> _entryLookup = new Dictionary<string, ZipArchiveEntry>();

public ZipRequestHandler(String prefix, string pathToZipArchive, bool caseSensitive = true, bool decompress = false):base(prefix)
{
    FileStream fs = new FileStream(pathToZipArchive, FileMode.Open, FileAccess.Read);
    _zipArchive = new ZipArchive(fs);            
    this._decompress = decompress;
    this._caseSensitive = caseSensitive;
    foreach (var entry in _zipArchive.Entries)
    {
        var entryName = (_caseSensitive) ? entry.FullName : entry.FullName.ToLower();
        _entryLookup[entryName] = entry;
    }
}

public override bool CanHandleRequest(string method, string path)
{
    if (method != "GET") return false;
    return Contains(path);
}

Given the ZipArchive, I collect the entries in the zip and their paths. When requests come in, I’ll use this to jump straight to the relevant entry. The effect of the caseSensitive parameter can be seen here: if the class is intended to run case insensitive, then I convert file names to lower case, and for later lookups the search name will also be converted to lower case. Provided that a request uses the GET verb and requests a file that is contained within the archive, this class will report that it can handle the request.

Of course, the handling of the request is where the real work happens. A request may have query parameters appended to the end of it. We don’t want those for locating a file; Url.AbsolutePath gives the request path with the query parameters removed. If the URL path is for a folder, then we append the name of the default document to the path. We also remove any leading slashes so that the name matches the path within the ZipArchive. While I use TryGetValue on the dictionary to retrieve the ZipArchiveEntry, this should always succeed since there was an earlier check for the presence of the file through the CanHandleRequest call. We then get the MIME type for the file using the method RequestHandlerBase::GetMimeTypeForExtension. If a MIME type was found, then the value for the Content-Type header is set.

The rest of the code looks similar to the code that returned the hardcoded responses. The ZipArchiveEntry abstracts away the details of getting a file out of a ZipArchive so nicely that it looks like reading from any other stream. The file is read and sent to the requester.

public override void HandleRequest(HttpListenerRequest request, HttpListenerResponse response)
{
    var path = request.Url.AbsolutePath;

    if (path.EndsWith("/"))
        path += DefaultDocument;
    if (path.StartsWith("/"))
        path = path.Substring(1);
    // Match the lookup table, which was lowercased when case sensitivity is off.
    if (!_caseSensitive)
        path = path.ToLower();

    if (_entryLookup.TryGetValue(path, out var entry))
    {
        var mimeType = GetMimeTypeForExtension(path);
        if (mimeType != null)
        {
            response.AppendHeader("Content-Type", mimeType);
        }
        try
        {
            response.ContentLength64 = entry.Length;
            using (var entryStream = entry.Open())
            {
                // Stream the decompressed entry directly to the response.
                entryStream.CopyTo(response.OutputStream);
            }
            response.OutputStream.Close();
        }
        catch (Exception)
        {
            // The client may have disconnected mid-transfer; nothing more to do.
        }
    }
    else
    {
        // CanHandleRequest already verified that the entry exists,
        // so this branch should not be reached.
        response.StatusCode = 404;
        response.Close();
    }
}

The code in its present state meets most of the current needs. I won’t be sharing the final version of the code here. That will be in a private archive. But I can share a version that is functional. You can find the source code on GitHub at the following address.

https://github.com/j2inet/LocalStaticWeb.Net



Hashing String Data in JavaScript, C#, C++, Kotlin, and SQL Server

I’m working with some data that needs to be hashed in both C# and JavaScript. Usually converting an algorithm across languages is pretty trivial, but JavaScript’s regular numeric type is a double-precision 64-bit float. While this sounds sufficiently large, when used as an integer it provides only 53 bits of precision. As you might imagine, using a 53-bit numeric type on one system and a 64-bit type on another would produce different outcomes, making hashed data from these two functions incompatible with each other. To avoid these problems, I needed to use a different type: JavaScript’s BigInt.
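
The 53-bit limit is easy to demonstrate in a console. Everything below is standard JavaScript behaviour, nothing project-specific:

```javascript
// Regular JavaScript numbers are IEEE-754 doubles: integers are only
// exact up to 2^53 - 1 (Number.MAX_SAFE_INTEGER).
console.log(Number.MAX_SAFE_INTEGER);      // 9007199254740991
console.log(2 ** 53 === 2 ** 53 + 1);      // true - the +1 is silently lost

// BigInt values have arbitrary precision, so 64-bit math stays exact.
console.log(2n ** 53n === 2n ** 53n + 1n); // false - BigInt keeps the distinction
```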

A potential issue with BigInt is that it can accommodate extremely large values. This isn’t usually a problem, but I need identical behaviour for the hash function to produce identical results across the languages. Fixing this is simple: I only need to perform a bitwise AND to truncate any bits beyond the 64th. The hash function I’m using was originally found on StackOverflow. This might not be the final hash function that I use, but for now it works.

A key thing to note in the JavaScript implementation is the n suffix on the numbers. This ensures that they are all using the BigInt type. Also take note of the bitwise AND with the number 0xFFFFFFFFFFFFFFFFn. This ensures that the number is truncated and acts like a 64-bit integer.

function hashString(s) {
    const A = 54059n;
    const B = 76963n;
    const C = 86969n;
    const FIRSTH = 37n;
    var h = FIRSTH;
    for (var i = 0; i < s.length; ++i) {
        var c = BigInt(s.charCodeAt(i));
        h = ((h * A) ^ (c * B)) & 0xFFFFFFFFFFFFFFFFn;
    }
    return h;
}

The C++ implementation (used for the Arduino) follows. Using a native 64-bit integer type in C++, there’s nothing special that needs to be done.

#include <stdint.h>

#define A 54059   /* a prime */
#define B 76963   /* another prime */
#define C 86969   /* yet another prime */
#define FIRSTH 37 /* also prime */

// uint64_t is used rather than unsigned long, which is only 32 bits wide
// on many Arduino boards; the mask keeps the math to 64 bits either way.
uint64_t hash_str(String s) {
  uint64_t h = FIRSTH;
  for (unsigned int i = 0; i < s.length(); ++i) {
    h = ((h * A) ^ (s[i] * B)) & 0xFFFFFFFFFFFFFFFFull;
  }
  return h;
}

The difference between the C# and C++ versions of the code is only notational. They both handle 64-bit integers just fine with no special tricks needed.

ulong hashString(String s) {
    const ulong A = 54059ul;
    const ulong B = 76963ul;
    const ulong C = 86969ul;
    const ulong FIRSTH = 37ul;
    var h = FIRSTH;
    var stringBytes = Encoding.ASCII.GetBytes(s);
    for (var i = 0; i < stringBytes.Length; ++i) {
        var c = stringBytes[i];
        h = ((h * A) ^ (c * B)) & 0xFFFFFFFFFFFFFFFFul;
    }
    return h;
}

The differences for Kotlin are also notational, though how the bitwise operators are expressed differs significantly from C# and C++.

fun hashString(s: String): ULong {
    val A: ULong = 54059u
    val B: ULong = 76963u
    val C: ULong = 86969u
    val FIRSTH: ULong = 37u
    var h = FIRSTH
    val stringBytes = s.toByteArray()
    for (i in 0..stringBytes.size - 1) {
        val c = stringBytes[i].toULong()
        h = ((h * A) xor (c * B)) and 0xFFFFFFFFFFFFFFFFu
    }
    return h
}

After having written this post, I was working in SQL Server. I was going to save some of this hashed data within SQL Server and decided to try implementing the hash function there. Everything started out the same, but I ran into a notable problem: arithmetic overflow when declaring the mask 0xFFFFFFFFFFFFFFFF. The mask isn’t strictly necessary, but I’ve kept it in case I use one of these implementations to hash to a smaller data type. I was using the BIGINT data type, but because it is signed, it only provides 63 bits of magnitude, not 64. Knowing that, I can just use a smaller mask to have a hash function that works identically across environments. If you’d like to try it out, the SQL Server implementation follows here.

CREATE FUNCTION HashString
(
    @SourceString as VARCHAR(15)
)
RETURNS BIGINT
AS
BEGIN
    DECLARE @A BIGINT = 54059
    DECLARE @B BIGINT = 76963
    DECLARE @C BIGINT = 86969
    DECLARE @FIRSTH BIGINT = 37
    DECLARE @StrLen BIGINT = LEN(@SourceString)
    DECLARE @Index BIGINT = 1
    DECLARE @Mask BIGINT = 0xFFFFFFFFFFFF
    DECLARE @Letter CHAR
    DECLARE @LetterCode BIGINT
    DECLARE @H BIGINT = @FIRSTH
    WHILE @Index <= @StrLen
    BEGIN
        SET @Letter = SUBSTRING(@SourceString, @Index, 1)
        SET @LetterCode = UNICODE(@Letter)
        SET @H = ((@H * @A) ^ (@LetterCode * @B)) & @Mask
        SET @Index = @Index + 1
    END
    RETURN @H
END
GO


Gaming Through Netflix

When I tell someone that I’m about to play a game on Netflix, the response is the same.

“A game on Netflix, what are you talking about?”

I guess this isn’t well known, but Netflix publishes and licenses video games. Most of these have been casual games, though. My attention was caught by an action game recently made available through Netflix: “Teenage Mutant Ninja Turtles: Shredder’s Revenge,” which was released on a range of systems. The look of the game reminds me of some previous TMNT games that I enjoyed, and I wanted to get this one. I never got around to making a purchase because I found the game on Netflix. Great! But how exactly does someone play a game on Netflix? And how does someone play **this** game, which allows up to 4 people to play at once? Let’s find out.

Netflix releases its games as mobile games. If you have an iOS or Android device, then you have what it takes to play them. You can usually find the games by searching for “Netflix” in either of the app stores, but I find it easier to open the mobile app and scroll down until you find a section listing the games. Selecting one of the games shows more information on it and presents a button to open the store for installing it. On Android, if the game is already installed, this will show a button to open and play the game instead.

Of course, the games can be played right away without any accessories. I personally hate playing arcade or action games on a phone with on-screen controls. Thankfully, that isn’t the only form of control: the game supports game controllers, and you may be able to play on a larger external display (such as a TV) with additional accessories.

Controllers

Both iOS and Android support game controllers. On iOS, you only need to pair the controller with the phone using the Bluetooth settings. On Android, you can either pair through the Bluetooth settings or connect the controller to the phone with a USB cable. I preferred the cable method so that I did not have to unpair the controller from the other device with which I use it. Recent Xbox controllers use USB-C as their connector. If you have an older controller, it uses micro-USB, so you’ll need a USB-C to micro-USB cable. I preferred to use the shortest cable possible, and I also use a USB-C right-angle adapter to keep the cable a little neater.

External Display

You can play the games on an external display too. Well, maybe; it depends on your phone. Many Samsung devices will work with generic USB-C to HDMI adapters. If you have some other Android device, it may or may not support video output over USB-C. Some iOS devices will work with HDMI adapters too. You just need an appropriate adapter to match either your Lightning port or USB-C port (I used this Lightning to HDMI adapter). Using an external display tends to drain the battery faster, so it’s a good idea to use an adapter that also allows charging. With some of my Samsung devices I have found this can be tricky. Samsung devices use USB-PD (Power Delivery): the device requests some amount of power from the power supply, and if the phone detects a difference between the amount requested and the amount received, it will alert the user that there is potentially moisture on the USB-C port. Instead of a PD power supply, I had better results using a “dumb” power supply when pairing with the display adapter, or using an HDMI adapter labeled as working with Samsung DeX. The video adapter that I preferred to use for Android was sold under the description of a USB-C Thunderbolt 3 docking station. My phone and tablet do not support Thunderbolt, but that doesn’t matter; the docking station works fine.

Multi-Player

I had hoped that I could connect multiple controllers to my Android device to play with multiple people. Sadly, that isn’t the case. Multiplayer works over the Internet, with each person having their own device. When you are starting the game, the character select screen also has a button to “Party Up,” with two modes of partying: a private party or a public party. For a private party, a six-character code displays on the screen; anyone you want to join the party needs to enter this code. If you select a public party, you can join up with others who wish to play online. For either option, you can either create a party or join one.

Transferring Progress Between Devices

As you might expect, with Netflix games your progress is saved with your profile. If you go to a different device but use the same profile, your progress shows up there. There’s nothing you need to do; it’s automatic.

Will It Replace my Game Streaming Service?

No. In its current form, Netflix Games is not going to replace Xbox or Luna streaming, but it doesn’t try to fill that space. Many of the games in the library are casual games. At the time that I’m writing this, there are about 50 games in the Netflix gaming library. While they aren’t competing with those larger game streaming services, the games do have a charm of their own. There are probably about five games that support controllers, including TMNT, Spiritfarer, and Stranger Things 3. Right now the games lean more casual (which to me makes sense, since that may be of broader interest). I do think it is a space to watch as Netflix continues to find ways to grow.



Calculating the Distance Between Geographical Coordinates in Kotlin

There’s an equation I’ve often found useful and have generally used for calculating the distance between geographical coordinates. Most recently, I used it in a program for a 360° interactive video player to find the distance between an area that a user selected and some point of interest. Fundamentally, it is an equation for measuring distances on a sphere (the haversine formula) and has many uses.

I was adjusting the source code to be used in an Android application and thought that the code might be useful to others, so I am reposting it here. I tend to work in SI units, but you could use this for miles, yards, inches, or another unit if you have the radius of the sphere of interest. The constants defined in the class provide the radius of the Earth in miles, kilometers, and meters. One of these values (or your own custom value) must be passed to have the result scaled to those units.
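For reference, the distance calculation below is the standard haversine formula for the great-circle distance between two points on a sphere of radius \(r\), with latitudes \(\varphi\) and longitudes \(\lambda\) in radians:

```latex
d = 2r \arcsin\!\left(\sqrt{\sin^2\!\frac{\varphi_2-\varphi_1}{2}
    + \cos\varphi_1\,\cos\varphi_2\,\sin^2\!\frac{\lambda_2-\lambda_1}{2}}\right)
```

Each term here corresponds one-to-one with the expression inside `CalcDistance` below.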

data class Coordinate(val latitude: Double, val longitude: Double)

class DistanceCalculator {
    companion object {
        const val EarthRadiusInMiles = 3956.0
        const val EarthRadiusInKilometers = 6367.0
        const val EarthRadiusInMeters = EarthRadiusInKilometers * 1000
    }

    // Converts degrees to radians.
    fun ToRadian(value: Double): Double {
        return value * (Math.PI / 180)
    }

    // Converts radians to degrees.
    fun ToDegree(value: Double): Double {
        return value * 180 / Math.PI
    }

    // Difference of two angles given in degrees, returned in radians.
    fun DiffRadian(val1: Double, val2: Double): Double {
        return ToRadian(val2) - ToRadian(val1)
    }

    // Distance between two coordinates; defaults to kilometers.
    fun CalcDistance(p1: Coordinate, p2: Coordinate): Double {
        return CalcDistance(
            p1.latitude,
            p1.longitude,
            p2.latitude,
            p2.longitude,
            EarthRadiusInKilometers
        )
    }

    fun Bearing(p1: Coordinate, p2: Coordinate): Double {
        return Bearing(p1.latitude, p1.longitude, p2.latitude, p2.longitude)
    }

    // Rhumb-line bearing from the first point to the second, in degrees.
    fun Bearing(lat1: Double, lng1: Double, lat2: Double, lng2: Double): Double {
        val phi1 = ToRadian(lat1)
        val phi2 = ToRadian(lat2)
        var dLon = ToRadian(lng2 - lng1)
        val dPhi: Double = Math.log(
            Math.tan(phi2 / 2 + Math.PI / 4) / Math.tan(phi1 / 2 + Math.PI / 4)
        )
        // Take the shorter way around the globe.
        if (Math.abs(dLon) > Math.PI) {
            dLon = if (dLon > 0) -(2 * Math.PI - dLon) else 2 * Math.PI + dLon
        }
        return ToDegree(Math.atan2(dLon, dPhi))
    }

    // Haversine distance between two points, scaled by the given sphere radius.
    fun CalcDistance(
        lat1: Double,
        lng1: Double,
        lat2: Double,
        lng2: Double,
        radius: Double
    ): Double {
        return radius * 2 * Math.asin(
            Math.min(
                1.0,
                Math.sqrt(
                    Math.pow(Math.sin(DiffRadian(lat1, lat2) / 2.0), 2.0)
                            + Math.cos(ToRadian(lat1)) * Math.cos(ToRadian(lat2))
                            * Math.pow(Math.sin(DiffRadian(lng1, lng2) / 2.0), 2.0)
                )
            )
        )
    }
}
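As a quick sanity check, here is a usage sketch assuming the `DistanceCalculator` and `Coordinate` definitions above are in scope. The city coordinates are approximate and chosen only for illustration:

```kotlin
fun main() {
    val calc = DistanceCalculator()

    // Approximate coordinates for New York City and Los Angeles.
    val nyc = Coordinate(40.7128, -74.0060)
    val la = Coordinate(34.0522, -118.2437)

    // The two-argument overload returns kilometers.
    val km = calc.CalcDistance(nyc, la)
    println("Distance: %.1f km".format(km)) // roughly 3,900 km

    // Pass a different radius constant to get other units.
    val miles = calc.CalcDistance(
        nyc.latitude, nyc.longitude,
        la.latitude, la.longitude,
        DistanceCalculator.EarthRadiusInMiles
    )
    println("Distance: %.1f miles".format(miles)) // roughly 2,400 miles
}
```

Note that because the angular distance is computed first and only scaled by the radius at the end, the same call works for any sphere whose radius you know.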
