Updating Christmas Gifts After They Are Sent

Christmas is around the corner. Among the items of interest this year is the Analogue Pocket. The Pocket is an FPGA-based device that can hardware-emulate many older game consoles and has some games of its own. I’m getting one prepared for someone else, but I also need to send the device soon to ensure that it arrives at its destination before Christmas. This creates a conflict between loading more games onto the device and shipping it on time. No worries: I can satisfy both if I send the device with something that uploads the content after it arrives.

A lot of physical game releases do this, either when there is a zero-day “patch” for a game or when the disc is only a license for the game and the actual game and its content are only available online. I’ll be shipping the memory card for the device with a call to action to run the “game installer” on the memory card. After the card is mailed, I can take care of preparing the actual image. The game installer will reach out to my website to find a list of files to download to the memory card, zip files to decompress, and folders to create.

Safety

Though I’m the only one that will be making payloads for my downloader to run, I still imagined some problem scenarios that I wanted to make impossible or more difficult. What if someone modified the download list so that it targeted a system directory or some other sensitive location? I don’t want that to happen. I’ve made my downloader so that it can only write to the folder in which it lives and to its subfolders. If the characters needed to reach a parent directory or another drive are present in the download list, the application will intentionally crash.

Describing an Asset

I started by describing the information that I would need to download an asset. An asset could be a file, a folder, or a zip file. I’ve got an enumeration for flagging these types.

    public enum PayloadType
    {
        File,
        Folder,
        ZipFile,
    }

Each asset of this type (which I will call a “Payload” from here on) can be described with the following structure.

    public class PayloadInformation
    {
        [JsonPropertyName("payloadType")]
        [JsonConverter(typeof(JsonStringEnumConverter))]
        public PayloadType PayloadType { get; set; } = PayloadType.File;

        [JsonPropertyName("fileURL")]
        public string FileURL { get; set; } = "";

        [JsonPropertyName("targetPath")]
        public string TargetPath { get; set; } = "";
    }

For files and zip archives, the FileURL property contains the URL of the source. The TargetPath property contains a relative path to which the payload item should be downloaded or unzipped. A download set can contain multiple assets; I broke the files for the device that I was sending into several zip files. Sorry, but in the interest of not inundating my site with several people trying this out, I’m not exposing the actual URLs for the assets here. The application will be grabbing a collection of these PayloadInformation items.

    public class PayloadInformationList: List<PayloadInformation>
    {
        public PayloadInformationList() { }
    }

The list of assets is placed in a JSON file and made available on a web server.

[
  {
    "payloadType": "ZipFile",
    "fileURL": "https://myserver.com/Pocket.zip",
    "targetPath": "",
    "versionNumber": "0"
  },
  {
    "payloadType": "ZipFile",
    "fileURL": "https://myserver.com/Assets_1.zip",
    "targetPath": "Assets",
    "versionNumber": "0"
  },
  {
    "payloadType": "ZipFile",
    "fileURL": "https://myserver.com/Assets_2.zip",
    "targetPath": "Assets",
    "versionNumber": "0"
  },
  {
    "payloadType": "ZipFile",
    "fileURL": "https://myserver.com/Assets_3.zip",
    "targetPath": "Assets",
    "versionNumber": "0"
  },
  {
    "payloadType": "ZipFile",
    "fileURL": "https://myserver.com/Assets_4.zip",
    "targetPath": "Assets",
    "versionNumber": "0"
  },
  {
    "payloadType": "Folder",
    "targetPath": "Memories/Save States",
    "versionNumber": "1"
  },
  {
    "payloadType": "Folder",
    "targetPath": "Assets",
    "versionNumber": "1"
  }
]

I might use some form of this again someday, so I’ve placed the initial URL from which the download list is retrieved in the application settings. In the compiled application, the application settings are saved in a JSON file that can be altered with any text editor.
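
As a sketch, the deployed settings file contains an entry along these lines; the property name shown here is illustrative rather than the actual name in my build.

{
  "downloadUrl": "https://myserver.com/payloadList.json"
}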

About the Interface

The user interface for this application uses WPF. I grabbed a set of base classes that I often use with WPF applications. I made this using a build of Visual Studio that was released just a month ago and contains significant updates. I found that my base classes no longer work as expected under this new version of Visual Studio. That’s something I will have to tackle another day; I think there has been a change in the relationship between Linq Expressions and Member Expressions. For now, I just used a subset of the functionality that the classes offered. Most of the work done by the application can be found in MainViewModel.cs.

To retrieve the list of assets, I have a method named GetPayloadList() that downloads the JSON containing the list of files and deserializes it. Though I would usually use JSON.Net for serialization needs, here I used System.Text.Json. I also check the paths for characters indicating an attempt to go outside of the application’s root directory and throw an exception if this occurs.

async Task<List<PayloadInformation>> GetPayloadList()
{
    HttpClient client = new HttpClient();
    var response = await client.GetAsync(DownloadUrl);
    var stringContent = await response.Content.ReadAsStringAsync();
    var payloadList = JsonSerializer.Deserialize<List<PayloadInformation>>(stringContent);
    payloadList.ForEach(p =>
    {
        if (!String.IsNullOrEmpty(p.TargetPath))
        {
            // Reject paths that could escape the application's root folder:
            // parent-directory markers, drive/volume separators, or rooted paths.
            if (p.TargetPath.Contains("..") || p.TargetPath.Contains(":") ||
                p.TargetPath.StartsWith("\\") || p.TargetPath.StartsWith("/")
            )
            {
                throw new Exception("Invalid Target Path");
            }
        }
    });
    return payloadList;
}

Within MainViewModel::DownloadRoutine() (which runs on a different thread) I step through the payload descriptions one at a time and take action for each one. For folder items, the application just creates the folder (and parent folders if needed). For files, the file is downloaded from the web source to a temporary file on the computer. After it is completely downloaded, it is moved to the final location. This reduces the chance of there being a partially downloaded file on the memory card. The process performed for Zip files is a variation of what is done for files. The zip file is downloaded to a temporary location, and then it is decompressed from that temporary location to its target folder.

while (_downloadQueue.Count > 0)
{
    Phase = "Downloading...";
    var payload = _downloadQueue.Dequeue();
    DownloadProgress = 0;
    CurrentPayload = payload;
    switch (payload.PayloadType)
    {
        case PayloadType.File:
            {
                Phase = "Downloading";
                var response = client.GetAsync(payload.FileURL).Result;
                var content = response.Content.ReadAsByteArrayAsync().Result;
                // Download to a temporary file first, then move it into place,
                // so a partial download never sits at the final location.
                var fileName = Path.GetFileName(payload.TargetPath);
                var tempFilePath = Path.Combine(TempFolder, fileName);
                File.WriteAllBytes(tempFilePath, content);
                File.Move(tempFilePath, payload.TargetPath, true);
            }
            break;
        case PayloadType.Folder:
            {
                Phase = "Creating Directory";
                var directoryName = payload.TargetPath.Replace('/', Path.DirectorySeparatorChar);
                var directoryInfo = new DirectoryInfo(directoryName);
                if (!directoryInfo.Exists)
                {
                    // Create() also creates any missing parent directories.
                    directoryInfo.Create();
                }
            }
            break;
        case PayloadType.ZipFile:
            {
                WebClient webClient = new WebClient();
                webClient.DownloadProgressChanged += DownloadProgressChanged;
                webClient.DownloadFileCompleted += WebClient_DownloadFileCompleted;
                // Path.GetTempFileName() returns a rooted path in the system temp
                // directory; generate a name within TempFolder instead so the
                // archive lands where expected.
                var tempFilePath = Path.Combine(TempFolder, Path.GetRandomFileName() + ".zip");
                var directoryName = payload.TargetPath.Replace('/', Path.DirectorySeparatorChar);

                if (String.IsNullOrEmpty(directoryName))
                {
                    directoryName = ".";
                }
                var directoryInfo = new DirectoryInfo(directoryName);
                if (!directoryInfo.Exists)
                {
                    directoryInfo.Create();
                }
                webClient.DownloadFileAsync(new Uri(payload.FileURL), tempFilePath);
                // Block this worker thread until the completed handler signals.
                _downloadCompleteWait.WaitOne();
                Phase = "Decompressing";
                System.IO.Compression.ZipFile.ExtractToDirectory(tempFilePath, directoryInfo.FullName, true);
            }
            break;
        default:
            break;
    }
}

Showing Progress

The download process can take a while, so I thought it important to make it visible that the process was progressing. The primary item of feedback is a progress bar. As long as it is growing, the user knows that data is flowing. I used the WebClient::DownloadProgressChanged event to get updates on how much of a file has been downloaded and update the progress bar accordingly.

void DownloadProgressChanged(Object sender, DownloadProgressChangedEventArgs e)
{
    // Displays the operation identifier, and the transfer progress.
    System.Diagnostics.Debug.WriteLine("{0}    downloaded {1} of {2} bytes. {3} % complete...", 
                        (string)e.UserState, e.BytesReceived,e.TotalBytesToReceive,e.ProgressPercentage);
    DownloadProgress = e.ProgressPercentage;
}

Handling Errors

There’s a good bit of error handling missing from this code. I made that decision because of time. Ideally, the program would ensure that it has a connection to the server hosting the source files. This is different from checking whether there is an Internet connection. The computer having an Internet connection doesn’t imply that it has access to the files, nor does having access to the files imply general access to the Internet. Having used a lot of restricted networks, I’m of the position that just checking for an Internet connection would possibly not be sufficient.

It is also possible for a download to be disrupted for a variety of reasons. In addition to detecting this, implementing download resumption would minimize the impact of such occurrences.
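
As a sketch of what resumption could look like, the method below re-requests only the missing bytes, assuming the server supports HTTP range requests. The method and parameter names here are illustrative and not part of the application above.

async Task ResumeDownloadAsync(HttpClient client, string url, string tempFilePath)
{
    // Find out how much of the file we already have.
    long existingLength = File.Exists(tempFilePath) ? new FileInfo(tempFilePath).Length : 0;

    var request = new HttpRequestMessage(HttpMethod.Get, url);
    if (existingLength > 0)
    {
        // Ask the server for only the bytes we do not already have.
        request.Headers.Range = new System.Net.Http.Headers.RangeHeaderValue(existingLength, null);
    }

    using var response = await client.SendAsync(request, HttpCompletionOption.ResponseHeadersRead);
    response.EnsureSuccessStatusCode();

    // If the server ignored the Range header (status 200 instead of 206), start over.
    var mode = response.StatusCode == System.Net.HttpStatusCode.PartialContent
        ? FileMode.Append
        : FileMode.Create;
    using (var fileStream = new FileStream(tempFilePath, mode, FileAccess.Write))
    {
        await response.Content.CopyToAsync(fileStream);
    }
}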

If I come back to this application again, I might first probe each of the resources with an HTTP HEAD request to see whether they are available. Such a request would also make known the sizes of the files, which could be used to implement a progress bar for total progress. Slow downloads, though not an error condition, could be interpreted as one; sufficiently informing the user of what’s going on can help prevent that.
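
A probe of that sort could look something like the following sketch, assuming the server answers HEAD requests with a Content-Length header; the method name is illustrative.

async Task<long?> GetRemoteFileSizeAsync(HttpClient client, string url)
{
    var request = new HttpRequestMessage(HttpMethod.Head, url);
    using var response = await client.SendAsync(request);
    if (!response.IsSuccessStatusCode)
    {
        return null; // The resource is missing or unreachable.
    }
    return response.Content.Headers.ContentLength;
}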

The Code

If you want to grab the code for this and use it for your own purposes, you can find it on GitHub.

https://github.com/j2inet/filedownloader



Iterating Maps in C++

Though I feel like it has become a bit of a niche language, I enjoy coding in C++. It was one of the earliest languages I learned while in grade school. In one of the projects I’m playing with now, I need to iterate through a map. I find the ways in which this has evolved over C++ versions interesting and wanted to show them for comparison. I’m using Visual C++ 2022 for my IDE. It supports up to C++ 20, though it defaults to C++ 14.

Changing the C++ Version

To try out the code that I’m showing here, you’ll need to know how to change the C++ version for your compiler. I’ll show how to do that with Visual C++. If you are using a different compiler, you’ll need to check its documentation. In a C++ project, right-click on the project in the Solution Explorer and select “Properties.” From the tree of options on the left, select Configuration Properties->C/C++->Language. On the right side, the option called C++ Language Standard will let you change the version. The options there at the time that I’m writing this are C++ 14 Standard, C++ 17 Standard, and C++ 20 Standard.
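
If you are using GCC or Clang instead, the language standard is selected with a compiler flag, for example:

g++ -std=c++17 main.cpp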

Examples on How to Iterate

A traditional way of iterating involves using an iterator object for the map. If you look in existing C++ source code, you are likely to encounter this method, since it has been available for a long time and is still supported in newer C++ versions. It follows the same pattern you will see for iterating through other Standard Template Library collections. Though it’s recognizable to those who use the Standard Template Library in general, it does use iterators, which behave like pointers and carry some of the same risks. Note that I am using the C++ 11 auto keyword to have the compiler infer the types and make this code more flexible.

for (auto map_iterator = shaderMap.begin(); map_iterator != shaderMap.end(); map_iterator++)
{
     auto key = map_iterator->first;
     auto value = map_iterator->second;
}

A safer method avoids iterators altogether. With this next version, we get an object from which we can read the values directly. I take references to the item’s members; in optimized builds, the references are purely notational and don’t result in an extra operation. I also think this looks cleaner than the previous example.

for(auto& mapItem: shaderMap)
{
     auto& key = mapItem.first;
     auto& value = mapItem.second;
}

The last version that I’ll show works in C++ 17 and above. It makes use of structured bindings. In the for-loop declaration, we name the fields that we wish to reference and get variables for accessing them. This is the method that I prefer; it generally looks the cleanest.

for (auto const& [key, blob] : shaderMap)
{
     // key and blob bind directly to each entry's first and second members.
}
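
For completeness, here is a compilable example of the structured-binding form, with a hypothetical shaderMap standing in for the one in my project.

#include <iostream>
#include <map>
#include <string>

int main()
{
    // A stand-in for the map used in the snippets above.
    std::map<std::string, std::string> shaderMap{
        {"vertex", "basic_vs"},
        {"pixel", "basic_ps"}
    };

    for (auto const& [key, value] : shaderMap)
    {
        std::cout << key << " -> " << value << "\n";
    }
    return 0;
}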

Why not just show the “best” version?

Best is a bit subjective, and even then, it might not be available to every project. You might have a codebase that is using something other than the most recent version of the C++ language. Even if your environment does support changing the language version, I wouldn’t do so arbitrarily. Though language versions generally maintain backwards compatibility, changing the version is a sweeping change that, for a complex project, could have unknown effects. If there is a productivity reason for making the change and the time and resources are available for fully testing the application, then proceeding might be worth considering. But I discourage giving in to the temptation to use the newest version only because it is newer.

Error Explanation: Microsoft C++ exception: Poco::NotFoundException

While working on a Direct3D 11 program, I found that something wasn’t rendering correctly. I started to examine the debug output and came across some exceptions. These exceptions had nothing to do with my rendering error, but I wanted to know what was causing them.

Exception thrown at 0x00007FFE413FCF19 in D3DAppWindow.exe: Microsoft C++ exception: Poco::NotFoundException at memory location 0x0000000B3B5E27C0.
Exception thrown at 0x00007FFE413FCF19 in D3DAppWindow.exe: Microsoft C++ exception: Poco::NotFoundException at memory location 0x0000000B3B5E2800.

I traced this error back to my call to create a D3D11Device. To debug it any further, I’d have to start debugging code outside of what I wrote. The good news is that if you are seeing this exception, it’s not your fault. You are likely using an NVIDIA video adapter, and the exception is coming from its driver. The bad news is that there’s nothing you can do about it at the moment; it’s up to NVIDIA to fix it. It may be helpful to provide information on which NVIDIA driver and OS version you use on this NVIDIA thread.



Shared Handles in C++ on Win32

Shared pointers are objects in C++ that manage pointers. As a pointer to an object is passed around, copied, or deleted, a shared pointer keeps track of how many references there are to the object it refers to. When all references to the object are destroyed or go out of scope, the shared pointer deletes the object and frees its memory. This gives smart pointers in C++ the feel of a managed memory environment; the burden on the developer of managing memory is pleasantly diminished.

The standard template library offers, among others, the class std::shared_ptr for creating shared pointers. There are some other classes, such as std::unique_ptr, with special behaviours (in that case, ensuring that only one reference to the object exists). std::shared_ptr also lets the developer specify a custom deleter for the object; if some specific behaviour is needed when an object is being deallocated, this feature can support that. These are the signatures for some of the constructors that allow custom deleters.

template< class Y, class Deleter> shared_ptr( Y* ptr, Deleter d );
template< class Deleter> shared_ptr( std::nullptr_t ptr, Deleter d );
template< class Y, class Deleter, class Alloc > shared_ptr( Y* ptr, Deleter d, Alloc alloc );
template< class Deleter, class Alloc> shared_ptr( std::nullptr_t ptr, Deleter d, Alloc alloc );
template< class Y, class Deleter> shared_ptr( std::unique_ptr<Y, Deleter>&& r );

Structures like this are not limited to being used only for pointers. They can be used for other resources too. My interest was in using them to manage handles for Windows objects. Handles are values that identify a system resource, such as a file. A handle’s value is not a memory address; it is a generally opaque numeric identifier. Think of it as an ID number. When the object that a handle refers to is no longer needed, the handle should be freed with a call to CloseHandle().

I was working with a program written in C/C++ for Windows and writing a function to load the contents of a file. This is the original function.

vector<unsigned char> LoadFileContents(std::wstring sourceFileName)
{
    vector<unsigned char> retVal;
    auto hFile = CreateFile(sourceFileName.c_str(), GENERIC_READ, FILE_SHARE_READ, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (hFile != INVALID_HANDLE_VALUE)
    {
        DWORD fileSize = GetFileSize(hFile, NULL);

        retVal.resize(fileSize);
        DWORD bytesRead;
        // ReadFile returns a BOOL; the final parameter is an optional OVERLAPPED pointer.
        BOOL result = ReadFile(hFile, retVal.data(), fileSize, &bytesRead, NULL);
        CloseHandle(hFile);
    }
    return retVal;
}

Well, that’s not actually the original. In the original, I forgot to make the call to CloseHandle(). Forgetting to do this could lead to resource leaks in the program or the file not being available for writing later because a read handle is still open. For my end goal, this won’t be the only file that I use, nor will files be the only type of handles. I wanted to manage these in a safer way. Here, I use the std::unique_ptr to manage handles. I’ll make a custom deleter that will close a handle.

My custom deleter is implemented as a functor. A functor is an object that can be used as a function; these are often used in callback operations. Functors, unlike typical functions, can also have state. In C++, a functor is generally created by defining operator() for the object. operator() can take any number of arguments. For my purposes, it only needs one: the HANDLE to be closed. A HANDLE can have two values that indicate it isn’t referencing a valid object: the constant INVALID_HANDLE_VALUE (whose literal value is -1), and 0. To ensure CloseHandle() isn’t called on an invalid value, I only call it when neither of these values was passed.

struct HANDLECloser
{
	void operator()(HANDLE handle) const
	{
		if (handle != INVALID_HANDLE_VALUE && handle != 0)
		{
			CloseHandle(handle);
		}
	}
};

Since there will only ever be one object accessing my file handles, I’ll be using std::unique_ptr for my file handles. With the above declaration I could begin using std::unique_ptr objects immediately.

auto myFileHandle = std::unique_ptr<void, HANDLECloser>(hFile);

That’s a lot to type, though. In the interest of brevity, let’s make a declaration so that we can invoke it with fewer keystrokes.

using HANDLE_unique_ptr = std::unique_ptr<void, HANDLECloser>;

With that in place, the previous call to initialize a unique pointer could be shortened to the following.

auto myFileHandle = HANDLE_unique_ptr(hFile);

That’s a bit more concise. Let’s add one more thing. Generally, I would be using this with the Win32 CreateFile function. Let’s make a CreateFileHandle() function that takes the same parameters as CreateFile but returns our std::unique_ptr for our file handle.

HANDLE_unique_ptr CreateFileHandle(std::wstring fileName, DWORD dwDesiredAccess, DWORD dwShareMode, LPSECURITY_ATTRIBUTES lpSecurityAttributes, DWORD dwCreationDisposition, DWORD dwFlagsAndAttributes, HANDLE hTemplateFile)
{
	HANDLE handle = CreateFile(fileName.c_str(), dwDesiredAccess, dwShareMode, lpSecurityAttributes, dwCreationDisposition, dwFlagsAndAttributes, hTemplateFile);
	if (handle == INVALID_HANDLE_VALUE || handle == nullptr)
	{
		return nullptr;
	}
	return HANDLE_unique_ptr(handle);
}

Using these new pieces, the file-loading function now looks like the following.

vector<unsigned char> LoadFileContents(std::wstring sourceFileName)
{
    vector<unsigned char> retVal;
    auto hFile = CreateFileHandle(sourceFileName, GENERIC_READ, FILE_SHARE_READ, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (hFile)
    {
        DWORD fileSize = GetFileSize(hFile.get(), NULL);
        retVal.resize(fileSize);
        DWORD bytesRead;
        BOOL result = ReadFile(hFile.get(), retVal.data(), fileSize, &bytesRead, NULL);
    }
    return retVal;
}

There are some other good bits of code in the project from which I took this that I plan to share in the coming weeks. Some parts are simple but useful; other parts are more complex. Come back in a couple of weeks for the next bit that I have to share.




Recompiling the V8 JavaScript Engine on Windows

Note Added 2025 March 10 – These instructions no longer work. Google has dropped support for using MSVC. It is still possible to build on Windows using Clang, but this presents new challenges, such as linking Clang binaries to MSVC binaries. More information on this change can be found in a Google Group discussion here.

Note Added 2024 September 3 – I tried to follow my own instructions on a whim today and found that some parts of the instructions don’t work. I made my way through them with adjustments to get to success.

I decided to compile the Google V8 JavaScript engine. Why? So that I could include it in another program. Google doesn’t distribute binaries for V8, but they do make the source code available. Compiling it is, in my opinion, a bit complex. This isn’t a criticism; there are a lot of options for how V8 can be built. Rather than publishing every permutation of these options for each version of V8, Google leaves it to developers to set the options themselves and build for their platform of interest.

But Isn’t There Already Documentation on How to Do This?

There does exist documentation from Google on compiling Chrome. But there are variations between those instructions and what must actually be done. I found myself searching the Internet for a number of other issues that I encountered and made notes on what I had to do to get around compilation problems. The documentation comes close to what’s needed, but it isn’t without error and deviation.

Setting Up Your Environment

Before touching the v8 source code, ensure that you have installed Microsoft Visual Studio. I am using Microsoft Visual Studio 2022 Community Edition. There are some additional components that must be installed. In an attempt to make this setup process as scriptable as possible, I have a batch file that has the Visual Studio Installer add the necessary components. If a component is already installed, no action is taken. Though the Google V8 instructions also offer a command to accomplish the same thing, this is where I encountered my first variation from their instructions. Their instructions assume the Visual Studio Installer command is named setup.exe (it probably was on a previous version of Visual Studio), where my installer is named vs_installer.exe. There were also additional parameters that I had to pass, possibly because I have more than one version of Visual Studio installed (Community Edition 2022, Preview Community Edition 2022, and a 2019 version).

pushd C:\Program Files (x86)\Microsoft Visual Studio\Installer\

vs_installer.exe install --productid Microsoft.VisualStudio.Product.Community --ChannelId VisualStudio.17.Release --add Microsoft.VisualStudio.Workload.NativeDesktop  --add Microsoft.VisualStudio.Component.VC.ATLMFC  --add Microsoft.VisualStudio.Component.VC.Tools.ARM64 --add Microsoft.VisualStudio.Component.VC.MFC.ARM64 --add Microsoft.VisualStudio.Component.Windows10SDK.20348 --includeRecommended

popd

You may need to make adjustments if your installer is located in a different path.

While those components are installing, let’s get the code downloaded and put in place. I did the download and unpacking from PowerShell. All of the commands that follow were stored in a PowerShell script. Scripting the process makes it more repeatable and easier to document (since the scripts are also a record of what was done). You do not have to use the same file paths that I do, but if you change them, you will need to make adjustments to the instructions wherever one of these paths is used.

I generally avoid placing folders directly in the root. The one exception to that being a folder I make called c:\shares. There’s a structure that I conform to when placing this folder on Windows machines. For this structure, Google’s code will be placed in subdirectories of c:\shares\projects\google. In the following script you’ll see that path used.

$depot_tools_source = "https://storage.googleapis.com/chrome-infra/depot_tools.zip"
$depot_tools_download_folder= "C:\shares\projects\google\temp\"
$depot_tools_download_path = $depot_tools_download_folder + "depot_tools.zip"
$depot_tools_path = "c:\shares\projects\google\depot_tools\"
$chromium_checkout_path = "c:\shares\projects\google\chromium"
$v8_checkout_path = "c:\shares\projects\google\"

mkdir $depot_tools_download_folder
mkdir $depot_tools_path
mkdir $chromium_checkout_path
mkdir $v8_checkout_path

pushd "C:\Program Files (x86)\Microsoft Visual Studio\Installer\"
.\vs_installer.exe install --productID Microsoft.VisualStudio.Product.Community --ChannelId VisualStudio.17.Release --add Microsoft.VisualStudio.Workload.NativeDesktop  --add Microsoft.VisualStudio.Component.VC.ATLMFC  --add Microsoft.VisualStudio.Component.VC.Tools.ARM64 --add Microsoft.VisualStudio.Component.VC.MFC.ARM64 --add Microsoft.VisualStudio.Component.Windows10SDK.20348 --includeRecommended
popd

Invoke-WebRequest -Uri $depot_tools_source -OutFile $depot_tools_download_path
Expand-Archive -LiteralPath $depot_tools_download_path -DestinationPath $depot_tools_path

After this script completes running, Visual Studio should have the necessary components and the V8/Chrome development tools are downloaded and in place.

There are some environment variables on which the build process is dependent. These variables could be set within batch files, could be set to be part of the environment for an instance of the command terminal, or set at the system level. I chose to set them at the system level. This was not my first approach. I set them at more local levels initially. But several times when I needed to open a new command terminal, I forgot to apply them, and just found it easier to set them globally.

ENVIRONMENT VARIABLE        VALUE
DEPOT_TOOLS_WIN_TOOLCHAIN   0
vs2022_install              C:\Program Files\Microsoft Visual Studio\2022\Community
PATH                        c:\shares\projects\google\depot_tools\;%PATH%

Environment variables that must be set
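
If you also prefer setting them from a command prompt, setx will persist them at the user level (setx affects newly opened terminals, not the one you are typing in). I’d edit PATH through the System Properties dialog rather than setx, since setx can truncate long values.

setx DEPOT_TOOLS_WIN_TOOLCHAIN 0
setx vs2022_install "C:\Program Files\Microsoft Visual Studio\2022\Community"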

From here on, we will be using the command prompt, and not PowerShell. This is because some of the commands that are part of Google’s tools are batch files that only run properly in the command prompt.

From the command terminal, run the command gclient. This will initialize the Google Tools. Next, navigate to the folder in which you want the v8 code to download. For me, this will be c:\shares\projects\google. The download process will automatically make a subfolder named v8. Run the following command.

fetch --no-history v8

This command can take a while to complete. After it completes you will have a new directory named v8 that contains the source code. Navigate to that directory.

cd v8

The online documentation that I see from Google for v8 is for version 9. I wanted to compile version 12.0.174.

git checkout 12.0.174

Update 2025 March 7

Reviewing the instructions now, I find that the above command fails. It may be necessary to fetch the tags for the versions with the following commands to get version 13.6.9.

git fetch --tags
git checkout 13.6.9

Today I am only rebuilding v8 for x64 Windows; eventually I’ll rebuild it for ARM64 also. Run the following commands. They will make the build directories and configurations for the different targets.

python3 .\tools\dev\v8gen.py x64.release
python3 .\tools\dev\v8gen.py x64.debug
python3 .\tools\dev\v8gen.py arm64.release
python3 .\tools\dev\v8gen.py arm64.debug

The build arguments for each environment are in a file named args.gn. Let’s update the configuration for the x64 debug build. To open the build configuration, type the following.

notepad out.gn\x64.debug\args.gn

This will open the configuration in notepad. Replace the contents with the following.

is_debug = true
target_cpu = "x64"
v8_enable_backtrace = true
v8_enable_slow_dchecks = true
v8_optimized_debug = false
v8_monolithic = true
v8_use_external_startup_data = false
is_component_build = false
is_clang = false

Chances are the only differences between the above and the initial version of the file are from the line v8_monolithic onward. Save the file. You are ready to start your build. To kick off the build, use the following command.

ninja -C out.gn\x64.debug v8_monolith

Update 2024 September 3 – Compiling this now, I’m encountering a different error. It appears the compiler I’m using takes issue with some of the nested #if directives in the source code. There was one in src/execution/frames.h around line 1274 that was problematic. It involved a line concerning enabling V8 Drumbrake (no, I don’t know what that is). This was a call to DCHECK, which is not used in production builds, so I just removed it. I encountered similar errors in src/diagnostics/objects-debug.cc and src\wasm\wasm-objects.cc.

This will also take a while to run, but it will fail: a third-party component fails to compile because of a line in a file named fmtable.cpp. You’ll have to alter a function to fix the problem. Open the file at .\v8\third_party\icu\source\i18n\fmtable.cpp. Around line 59, you will find the following code.

static inline UBool objectEquals(const UObject* a, const UObject* b) {
     // LATER: return *a == *b
     return *((const Measure*)a) == ((const Measure*)b);
}

You’ll need to change it so that it contains the following.

static inline UBool objectEquals(const UObject* a, const UObject* b) {
     // LATER: return *a == *b
     return *((const Measure*)a) == *b;
}

Save the file, and run the build command again. While that’s running, go find something else to do. Have a meal, fly a kite, read a book. You’ve got time. When you return, the build should have been successful.

Hello World

Now, let’s make a hello world program. Google already has a v8 hello world example that we can use to see that our build was successful. We will use it for now, as I’ve not discussed anything about the v8 object library yet. Open Microsoft Visual Studio and create a new C++ Console application. Replace the code in the cpp file that it provides with Google’s code.

// Copyright 2015 the V8 project authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#include "libplatform/libplatform.h"
#include "v8-context.h"
#include "v8-initialization.h"
#include "v8-isolate.h"
#include "v8-local-handle.h"
#include "v8-primitive.h"
#include "v8-script.h"

int main(int argc, char* argv[]) {
    // Initialize V8.
    v8::V8::InitializeICUDefaultLocation(argv[0]);
    v8::V8::InitializeExternalStartupData(argv[0]);
    std::unique_ptr<v8::Platform> platform = v8::platform::NewDefaultPlatform();
    v8::V8::InitializePlatform(platform.get());
    v8::V8::Initialize();

    // Create a new Isolate and make it the current one.
    v8::Isolate::CreateParams create_params;
    create_params.array_buffer_allocator =
        v8::ArrayBuffer::Allocator::NewDefaultAllocator();
    v8::Isolate* isolate = v8::Isolate::New(create_params);
    {
        v8::Isolate::Scope isolate_scope(isolate);

        // Create a stack-allocated handle scope.
        v8::HandleScope handle_scope(isolate);

        // Create a new context.
        v8::Local<v8::Context> context = v8::Context::New(isolate);

        // Enter the context for compiling and running the hello world script.
        v8::Context::Scope context_scope(context);

        {
            // Create a string containing the JavaScript source code.
            v8::Local<v8::String> source =
                v8::String::NewFromUtf8Literal(isolate, "'Hello' + ', World!'");

            // Compile the source code.
            v8::Local<v8::Script> script =
                v8::Script::Compile(context, source).ToLocalChecked();

            // Run the script to get the result.
            v8::Local<v8::Value> result = script->Run(context).ToLocalChecked();

            // Convert the result to an UTF8 string and print it.
            v8::String::Utf8Value utf8(isolate, result);
            printf("%s\n", *utf8);
        }

        {
            // Use the JavaScript API to generate a WebAssembly module.
            //
            // |bytes| contains the binary format for the following module:
            //
            //     (func (export "add") (param i32 i32) (result i32)
            //       get_local 0
            //       get_local 1
            //       i32.add)
            //
            const char csource[] = R"(
        let bytes = new Uint8Array([
          0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00, 0x01, 0x07, 0x01,
          0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, 0x03, 0x02, 0x01, 0x00, 0x07,
          0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, 0x0a, 0x09, 0x01,
          0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b
        ]);
        let module = new WebAssembly.Module(bytes);
        let instance = new WebAssembly.Instance(module);
        instance.exports.add(3, 4);
      )";

            // Create a string containing the JavaScript source code.
            v8::Local<v8::String> source =
                v8::String::NewFromUtf8Literal(isolate, csource);

            // Compile the source code.
            v8::Local<v8::Script> script =
                v8::Script::Compile(context, source).ToLocalChecked();

            // Run the script to get the result.
            v8::Local<v8::Value> result = script->Run(context).ToLocalChecked();

            // Convert the result to a uint32 and print it.
            uint32_t number = result->Uint32Value(context).ToChecked();
            printf("3 + 4 = %u\n", number);
        }
    }

    // Dispose the isolate and tear down V8.
    isolate->Dispose();
    v8::V8::Dispose();
    v8::V8::DisposePlatform();
    delete create_params.array_buffer_allocator;
    return 0;
}

If you try to build this now, it will fail. You need to do some configuration. Here is a quick list of the configuration changes. If you don’t understand what to do with these, that’s fine; I’ll walk you through applying them.

VC++ Directories :
	Include Directories : v8\include
	Library Directories <Debug>: v8\out.gn\x64.debug\obj
	Library Directories <Release>: v8\out.gn\x64.release\obj

C/C++
	Code Generation
		Runtime Library <Debug>: /MTd
		Runtime Library <Release>: /MT
	Preprocessor
		Preprocessor Definitions: V8_ENABLE_SANDBOX;V8_COMPRESS_POINTERS;_ITERATOR_DEBUG_LEVEL=0;

Linker
	Input
		Additional Dependencies: v8_monolith.lib;dbghelp.lib;Winmm.lib;

Right-click on the project file and select “Properties.” From the pane on the left, select VC++ Directories. In the Configuration drop-down at the top, select All Configurations. On the right there is a field named Include Directories. Select it, and add the full path to your v8\include directory. For me, this is c:\shares\projects\google\v8\include. If you build in a different path, it will be different for you. After adding the value, select Apply. You will generally want to press Apply after each field that you’ve changed.

Change the Configuration drop-down at the top to Debug. In the Library Directories entry, add the full path to your v8\out.gn\x64.debug\obj folder and click Apply. Change the Configuration drop-down to Release and in Library Directories add the full path to your v8\out.gn\x64.release\obj folder.

From the pane on the left, expand C/C++ and select Code Generation. On the right, set the Debug value for Runtime Library to /MTd and set the Release value for the field to /MT.

Change the Configuration option back to All Configurations and add the following values to Preprocessor Definitions.

V8_ENABLE_SANDBOX;V8_COMPRESS_POINTERS;_ITERATOR_DEBUG_LEVEL=0;

Keep the Configuration option on All Configurations. Expand Linker and select Input. For Additional Dependencies enter v8_monolith.lib;dbghelp.lib;Winmm.lib;

With that entered, press OK. You should now be able to run the program. It passes some JavaScript to the engine to execute and prints the results.

What’s Next

My next objective is to demonstrate how to project a C++ object into JavaScript. I also want to start thinning out the size of these files. On a machine that is only using the v8 binaries, the entire build tools are not needed. At the end of the above process, the v8 folder holds 12 GB of files. If you copy out only the build outputs and headers needed for other projects, that is reduced to 3 GB. Further reductions could come from changing some of the compilation options.



Making a Web Crawler using the Android Web Client

Source Code

Like many others, my coworkers and I have been called back to work in the office for part of the week. Returning to the office hasn’t been without its challenges, especially since the environment has substantially changed. At the end of one week, I was asked to collect some information on ads served to the browser in certain countries. To gather this information, I used a VPN to browse from a different country, and I created a web crawler using JavaScript and Node. It created a browser instance, followed links starting from a specific set of pages, kept track of the resources that the pages loaded, and downloaded content that was accessed from certain domains. The app worked fine, and it collected the information that I needed. On Monday, when I was in the office, I was asked to produce a similar dataset as seen from a different country. I started my software on this task only to find that the office network now actively blocks VPN connections.

I thought about driving back home to complete the task but decided to just make a new web crawler to run from my Android tablet. That’s what I did. I made an app with a WebView and had it load each one of my starting pages. For each page that loaded, there were two sets of data that I needed to capture: the resources that the page requested, and the links that were in the page. To retrieve this information, I would need a WebViewClient for the WebView. The WebViewClient is an object whose methods are called to let one intercept, or get notifications of, what the WebView is doing. I was only concerned with a few methods on this object.

  • onPageFinished – Fires once a page has finished loading
  • onLoadResource – Fires when a page requests a resource, such as an image (a sketch of this override appears below)
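
As a sketch of the resource-capture half, an onLoadResource override can hand each requested URL to the same data handler used elsewhere in this post; the ResourceRequested method name here is hypothetical, not part of the actual project.

override fun onLoadResource(view: WebView?, url: String?) {
    super.onLoadResource(view, url)
    if (url != null) {
        // Record the URL of every resource the page pulls in.
        dataHandler.ResourceRequested(url)
    }
}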

When a page finishes loading, I grab the links. There is no API specifically for querying the page’s DOM. There is, however, a method on the WebView to execute JavaScript and return the result as a string. I inject a small function into the page that grabs the links, then extract them from the JSON array of strings that comes back. This is the JavaScript.

(function extractLinks(){
     var list = Array.from(document.getElementsByTagName('a'));
     for(var i=0;i<list.length;++i) {
           list[i] = list[i].href;
     }
     return list;
})()

To execute the JavaScript in the WebView, I use the WebView’s evaluateJavascript() method. The method accepts a ValueCallback object; the value it receives is a string holding the JSON encoding of the result. I convert that to a String array and save the links. The two references to the dataHandler object are to a class that I defined. The two methods of interest here are LinksExtracted(String[]) and PageLoadComplete(). The LinksExtracted method receives all of the URLs of the links in the page; the dataHandler is responsible for saving them. PageLoadComplete is used to create a demarcation in the data between pages. Note that this method of capturing links isn’t perfect; it is possible that after a page loads, the page could dynamically adjust the HTML to remove some links and add others. For my application, the result of this apparent oversight is fine.

    override fun onPageFinished(view: WebView?, url: String?) {
        super.onPageFinished(view, url)

        view!!.evaluateJavascript("(function extractLinks(){var list = Array.from(document.getElementsByTagName('a')); for(var i=0;i<list.length;++i) { list[i] = list[i].href }; return list;})()",
            object:ValueCallback<String> {
                override fun onReceiveValue(value: String) {
                    if(value != null && value != "null")
                    {
                        val gson = GsonBuilder().create()
                        val theList = gson.fromJson<ArrayList<String>>(value, object :
                            TypeToken<ArrayList<String>>(){}.type)
                        if(theList != null) {
                            dataHandler.LinksExtracted(theList.toTypedArray());
                        }
                    }
                    dataHandler.PageLoadComplete()
                }
            }
            )
    }

The links are persisted to an SQLite database. To do this, I’ve defined a data class for holding a row of data.

package net.j2i.webcrawler.data
import kotlinx.serialization.Serializable

@Serializable
data class UrlReading(val sessionID:Long=0L, val pageRequestID:Long = 0L, val url:String = "", val timestamp:Long = -1L) {
}

The sessionID will be the same for all values captured during the same run of the program. pageRequestID increments every time a new page loads. url contains the information of interest, the URL. And timestamp contains the time at which the URL was captured.

Creation of the database and insertion of data into it is fairly plain-vanilla code. I won’t post it here, but if you would like to see it, it’s on GitHub: https://github.com/j2inet/sample-webcrawler/blob/main/app/src/main/java/net/j2i/webcrawler/data/UrlReadingDataHelper.kt

When the data is to be extracted, the program writes it to a CSV file with headers. To minimize the memory demand, I have a method on the data helper that writes each row out as the cursor reads it.

    fun writeAllRecords(os:OutputStreamWriter):List<UrlReading>  {

        os.write("SessionID, PageRequestID, Timestamp, URL\r\n")

        val readings = mutableListOf<UrlReading>()
        val db = writableDatabase
        val projection = arrayOf(
            BaseColumns._ID,
            UrlReadingsContract.COLUMN_NAME_SESSION_ID,
            UrlReadingsContract.COLUMN_NAME_PAGE_REQUEST_ID,
            UrlReadingsContract.COLUMN_NAME_URL,
            UrlReadingsContract.COLUMN_NAME_TIMESTAMP
        )
        val sortOrder = "${UrlReadingsContract.COLUMN_NAME_TIMESTAMP} ASC"
        val cursor = db.query(
            UrlReadingsContract.TABLE_NAME,
            projection,
            null,
            null,
            null,
            null,
            sortOrder
        )
        with(cursor) {
            while (moveToNext()) {
                val reading = UrlReading(
                    sessionID = getLong(getColumnIndexOrThrow(UrlReadingsContract.COLUMN_NAME_SESSION_ID)),
                    pageRequestID = getLong(getColumnIndexOrThrow(UrlReadingsContract.COLUMN_NAME_PAGE_REQUEST_ID)),
                    url = getString(getColumnIndexOrThrow(UrlReadingsContract.COLUMN_NAME_URL)),
                    // Read the timestamp from its own column, not the URL column.
                    timestamp = getLong(getColumnIndexOrThrow(UrlReadingsContract.COLUMN_NAME_TIMESTAMP)),
                )

                // Use the same comma separator and column order as the header row.
                val line = "${reading.sessionID}, ${reading.pageRequestID}, ${reading.timestamp}, ${reading.url}\r\n"
                os.write(line)
                readings.add(reading)
            }
        }
        return readings
    }

The program keeps track of the URLs that it has found links to and adds them to a list. When going to the next page, it randomly selects from this list (and removes the selected item). However, the program will first visit all of the initial URLs it was given before selecting randomly. If I didn’t do this, the links found on the first page loaded might crowd out the other initial pages, leaving them unvisited or with less influence on which pages get visited. Those initial URLs are added to the list, and a count of them is saved.

        UrlList.add("https://msn.com")
        UrlList.add("https://yahoo.com");
        linearLoadCount = UrlList.count()

The method for loading random URLs initially dequeues URLs from the beginning of the list. After all of the initial URLs have been read, random reads occur.

    fun openRandomSite() {
        var index = 0
        if(linearLoadCount>0) {
            // Still working through the initial URLs; take them in order.
            --linearLoadCount
        } else {
            index = random.nextInt(UrlList.count())
        }
        val nextUrl = UrlList[index]
        UrlList.removeAt(index)
        mainWebView!!.loadUrl(nextUrl)
    }

To keep the pages cycling, the PageLoadComplete() handler queues the next call to load a random page (with a delay).

            override fun PageLoadComplete() {
                ++pageSessionID;
                mainHandler.postDelayed(object:Runnable {
                    override fun run() {
                        openRandomSite()
                    }
                },NAVIGATE_DELAY)
            }

It took less time to write this than it would have taken to drive home. The initial set of URLs is hard-coded in the source. This was written to be used only once, so I skipped practices that would have made the program of more general utility. Nevertheless, I think it might be useful to someone. You can find the complete source code on GitHub.

https://github.com/j2inet/sample-webcrawler



USA Testing Emergency Alert System on 4 October 2023 around 2:20 pm

On 4 October 2023 around 2:20 PM, the USA is testing its emergency alert system. The test will be broadcast over radio and TV and sent to mobile phones. Expect phones to be blaring around you at about this time. Don’t worry, this is only a test.

If you are likely to be in a situation where you cannot afford or tolerate your phone going off, then you might want to keep your phone powered off around this time. Some environments, such as courthouses, have rules on phones being in silent mode or turned off (I believe a phone going off in court in Atlanta can get someone in trouble for contempt of court). Even if you’ve muted all your settings on your phone, this alert might not respect those settings. While some phones expose settings to silence other alerts, the national alert system’s setting has been unalterable on the phones that I’ve examined over the years.

When the test goes off, don’t be alarmed. If you have one of those emergency alert radios, it might be a good opportunity to see how well it works.

Updating your Profiles in Cisco AnyConnect (macOS)

Some years ago, I worked with a client and had to install the Cisco AnyConnect VPN client on my Mac. After the work was done, I uninstalled the software. Recently, I found myself needing the VPN with a different client. On reinstalling the software, all of the old settings from the previous engagement were still there, and the VPN software refused to save the new connection URL. To get the client to work the way I needed, I had to update the profile manually.

One of the places where the Cisco AnyConnect software saves information is /opt/cisco/anyconnect/profile. Navigating to that path in Terminal, you will find a couple of files. The one of interest is Anyconnect-SAML.xml. This is an XML file that contains the connection settings. In addition to this file, the software also remembers the last connection that it attempted. I don’t know where that information is stored, but it won’t be needed for this change. The simplest way to address the connection problem is to rename this file. I say “rename” and not “delete” so that the information is available should you need it. Renaming has the same effect as deleting but allows you to roll back. I changed the file to a name that had .backup on the end.
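
From Terminal, the rename looks something like the following; you will likely need sudo, since the folder is typically owned by root.

cd /opt/cisco/anyconnect/profile
sudo mv Anyconnect-SAML.xml Anyconnect-SAML.xml.backup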

With the file effectively deleted, if you restart the Cisco AnyConnect software, it will still show the last server that you connected to. Enter your new VPN URL and connect. After successfully connecting, the software will remember this URL and make it available the next time that you need to connect.

Setting a DLL Path at Runtime for P/Invoke

.Net applications can call functions from native DLLs using the [DllImport] attribute. This attribute takes as its argument the name of the DLL in which the target function is stored. But what does one do if the DLL is not in the paths that the system searches? First, let’s consider where the system looks for DLLs, in the order that it searches them.

  1. The Application Directory
  2. The System Directory
  3. The Windows Directory
  4. The Current Directory
  5. Directories in the PATH environment variable

If the target DLL isn’t in one of those folders, it won’t be found. There is a Win32 function that lets an application add a folder in which the system will look when resolving a DLL location at runtime. The function has the signature BOOL SetDllDirectory(LPCWSTR lpPathName). When this function is called with a valid path, the new search order is as follows.

  1. The Application Directory
  2. The Directory passed in SetDllDirectory()
  3. The System Directory
  4. The Windows Directory
  5. The Current Directory
  6. Directories in the PATH environment variable

The statement for adding a declaration for SetDllDirectory follows.

[DllImport("kernel32.dll", SetLastError = true)]
static extern bool SetDllDirectory(string lpPathName);
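
As a usage sketch, call SetDllDirectory before the first call into the target DLL, since the DLL is loaded lazily on the first P/Invoke. The native.dll name and its Add export below are hypothetical, and the snippet assumes the usual System, System.IO, and System.Runtime.InteropServices usings.

[DllImport("native.dll")] // a hypothetical native DLL and export
static extern int Add(int a, int b);

static void Main()
{
    // Register the extra search folder before the first P/Invoke into native.dll.
    string libPath = Path.Combine(AppContext.BaseDirectory, "libs");
    if (!SetDllDirectory(libPath))
    {
        throw new InvalidOperationException("SetDllDirectory failed.");
    }
    Console.WriteLine(Add(2, 3)); // native.dll is resolved and loaded here
}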


Customizing the Logitech/Saitek Flight Instrument Panel

Saitek (which was later acquired by Logitech) created flight instrument hardware that is primarily associated with Microsoft Flight Simulator. While they make various device types, the one in which I had the most interest is the “Flight Instrument Panel.” It is a small LCD display that connects to the computer via USB. It doesn’t appear that Logitech has made any changes to the hardware since its release; the device still uses a mini-USB connector.

I have some purposes for it beyond Microsoft Flight Simulator, and I wanted to perform some customization on the panel. After going through the setup, the panel began to display information. By default, it displays promotional information for other hardware until an application tells it to display something else. I’m not fond of advertisements on my idle devices and wanted to change these first. Thankfully, this can be done without any programming. The default display images come from JPG files that can be found in the file system after the device is set up. Navigate to C:\Program Files\Logitech\DirectOutput to see the files. Replace any one of them to alter what the screen displays.

Before purchasing a panel, I searched for an SDK for it. I didn’t find one, but I found that plenty of other people had software projects for it and figured I would be able to make it work. Only after getting the device set up did I find that the SDK was closer than I realized: documentation for controlling the panel installs alongside the panel’s software. The group of APIs in the SDK is referred to as DirectOutput. No, that’s not one of Microsoft’s DirectX APIs (like Direct3D, DirectInput, and so on); that’s just the name Saitek selected for their SDK.


Erasing an EPROM with Alternative Devices

I’ve come into possession of an EPROM and got a programmer for it. Writing data to it was easy. Erasing data is another matter. Note that I said EPROM and not EEPROM. What’s the difference? The first E in EEPROM stands for “Electrically”; an Electrically Erasable Programmable Read-Only Memory can be cleared with an electrical signal. The EPROM I have must instead be erased with UV light. There is a window in the ceramic package that exposes the silicon underneath. With enough UV light through this window, the chip should be erased.

There are devices sold specifically to erase such memory. I’m not using those. Instead, I have a number of other UV sources to test with. These are:

  • The Sun
  • A portable UV phone cleaner
  • A clamshell UV phone cleaner
  • A tube blacklight

I’m using an M27C256 32K EPROM. To know whether my attempt at erasing worked, I needed to first put something on it. I filled the memory with byte values counting from 0 to 255, repeating the sequence each time I reached the end. The entire 32K was filled with this pattern. To produce a file with the pattern, I wrote a few lines of code.

// See https://aka.ms/new-console-template for more information
// Fill a 32 KB (0x8000 byte) buffer with the repeating 0-255 pattern.
byte[] buffer = new byte[0x8000];
for (int i = 0; i < buffer.Length; i++)
{
    buffer[i] = (byte)i; // the cast keeps only the low 8 bits, so the value wraps at 255
}
// Write the pattern out to a file the EPROM programmer can load.
using (FileStream fs = new FileStream("content.bin", FileMode.Create, FileAccess.Write))
{
    fs.Write(buffer, 0, buffer.Length);
}

Now to get the resulting file copied to the EPROM. The easiest way to do that is with a dedicated EPROM programmer. They are relatively cheap, easy to find, and versatile. I found one on Amazon that worked well for me. Using it was only a matter of selecting what type of EPROM I was using, selecting a file containing the content to be written, and pressing the program button.

The software for writing information to the EPROMs

Reading from the EPROM is just as simple. After the EPROM is connected to the programmer and the EPROM model is selected in the software, a READ button copies all the bytes from the memory device and displays them in a hex editor. I used this functionality to determine whether the EPROM had been erased. Now that I have a way to read from and write to the EPROM, let’s test the different means of erasure.
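
Scanning a 32 KB hex dump by eye is tedious. If the programmer’s software can save what it reads back to a file, a few lines of C# can check whether every byte has returned to the erased state of 0xFF. The file name here is only an assumption.

using System;
using System.IO;

// Read the dump saved from the programmer and verify every byte is 0xFF,
// the value that erased EPROM cells read as.
byte[] dump = File.ReadAllBytes("readback.bin"); // hypothetical file name
bool erased = Array.TrueForAll(dump, b => b == 0xFF);
Console.WriteLine(erased ? "EPROM is erased." : "EPROM still contains data.");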

Using the Sun

These results were the most disappointing. After having an EPROM out for most of the day, the ROM was not erased. Speaking to someone else, I was told that it would take several days of exposure to erase the EPROM. I chose not to leave the EPROM out that long, as I’d risk forgetting it was out there when the weather turns wet.

Using a Portable UV Sanitizer

The portable UV sanitizer that I tried was received as a Christmas gift at the end of 2022. Such devices are widely available now in the wake of COVID. This unit charges with a USB cable and runs off of a battery. When turned on, it stays on until it is turned off, the battery goes dead, or someone turns it over; it will only emit light while facing downward. I speculate this is a safety feature, since you wouldn’t want to look directly into the UV light.

My first attempts to erase one of the EPROMs with this sanitizer were not successful. After several sessions, the EPROMs still had their data on them. While I wouldn’t look directly into the UV light, I could safely point my camera at it. The picture was informative. The light was brighter at the end closer to the power source and very dim at the other end. Before, I had only been ensuring that the window of the EPROM was under some portion of the lighting tube. Now I knew to place it close to the brighter end of the UV emitter. Using the new placement, I was able to erase an EPROM in about 60 minutes.

UV Sanitizer with the EPROM at the brighter end.

Provided that someone is only erasing a single EPROM and isn’t in a hurry, I think this could make for an adequate solution. If there’s more than one, though, this might not work as well, especially when one considers the time needed to recharge the battery after it has been diminished by an erasing session.

Clamshell UV Phone Cleaner

I received this clamshell UV phone cleaner as a gift nearly a decade ago. This specific model isn’t sold any more, but newer variations are available under the name PhoneSoap. These have a few advantages over the portable UV sanitizer. It runs from a 12 volt power source, so there’s no waiting for it to recharge before you can use it. It also appears to be a lot brighter, or at least to emit more light in the visible spectrum. The UV emitter automatically deactivates when the case is opened, but there is a brief moment as the case opens, before the light has turned off, in which some of the light spills out of the unit. The unit I use has emitters on both the hinged lid and the lower area of the case, so EPROMs placed in it can be oriented face-up or face-down and still be erased. When the case is closed, the emitter turns on for 300 seconds and then turns off. I’d like for it to run longer for my purposes, but 300 seconds isn’t bad. After one 5-minute session in the sanitizer, an EPROM still had data on it. But after a second 5-minute session it showed as erased. I think this unit is worthy of consideration.

Tube UV Light

I have an old UV tube light that I purchased in my teens. I dug it up and found a power supply for it. The light still works, but after leaving an EPROM in direct contact with it for well over 24 hours I found no change. I had expected this outcome for a few reasons. Among them is that UV lights of this type are commonly used where people can see them, while the cleaning UV lights carry warnings to keep them away from skin and eyes. From the glimpse I got of the cleaning lights through the phone’s camera, it looks like they operate at a different wavelength, though a camera is not a true measure of wavelength. There’s not much else to be said about the tube light.

The Winner

The clear winner here is the clamshell UV light. It was easy to use and erased the EPROM in ten minutes. The portable UV cleaner comes in second. The other sources didn’t cross the finish line, even given a generous amount of time to do so. It might be possible to eventually erase an EPROM with them, but I don’t think it is worth the time.

Now that I have a reliable way to erase these EPROMs, I can use them in the MC6800 computer that I was working on.



The File System Watcher::Reloading Content Automatically

I was performing enhancements on a video player that read its content at startup, and then would serve from that content on demand. The content, though loaded from the file system, was being put in place by another process. Since the primary application only scanned for content at startup, it would not detect the new content until after the scheduled daily reboot.

The application needed a different behaviour: it needed to detect when content was updated and rescan accordingly. There are several ways this could be done, with periodically rescanning the content being among the most obvious. There are better solutions, though. The one I am sharing here is the File System Watcher. I’ll be looking at the implementations for NodeJS and .Net.

The File System Watcher keeps track of files in specified paths and notifies an application when a change of interest occurs. One could watch an entire folder or only specific files. If any watched files change, the application receives a notification.

Let’s consider how this feature is used in NodeJS first. You’ll need to import the file system object. The file system object has a function named watch that accepts a file path. The object that is returned is used to receive notifications when an item within that path is created or updated.

const fs = require('fs');
const path = require('path');

// Watch the config folder that sits next to this script.
let watchPath = path.join(__dirname, 'config');
console.log(`Watch path: ${watchPath}`);
let watcher = fs.watch(watchPath);
watcher.on('change', (event, filename) => {
    console.log(event);    // the event type: 'rename' or 'change'
    console.log(filename); // the name of the file that changed
});

console.log('asset watcher activated');

When a configuration file changes, how that change is handled depends on the logic of your application.

In the .Net environment there’s a class named FileSystemWatcher that accepts a directory name and a file filter. The file filter is the pattern for the file names that you want considered. Use *.* to monitor for any file. You can also filter for notifications of file attribute changes. Instances of FileSystemWatcher expose several events for different types of file system events.

  • Renamed
  • Deleted
  • Changed
  • Created

When an event occurs, the application receives a FileSystemEventArgs object. It provides three properties about the change that has occurred.

  • ChangeType – Type of event that occurred
  • FullPath – The full path to the file system object affected
  • Name – the name of the file system object affected

These properties should tell you most of what you need to know about the nature of the change.
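
Putting these pieces together, a minimal console sketch (separate from the samples below) could look like the following. It watches the current directory for any file and prints the three properties for each event.

using System;
using System.IO;

// Watch the current directory for any file and report each event.
using var watcher = new FileSystemWatcher(Directory.GetCurrentDirectory(), "*.*");
watcher.Created += (s, e) => Console.WriteLine($"{e.ChangeType}: {e.FullPath} ({e.Name})");
watcher.Changed += (s, e) => Console.WriteLine($"{e.ChangeType}: {e.FullPath} ({e.Name})");
watcher.Deleted += (s, e) => Console.WriteLine($"{e.ChangeType}: {e.FullPath} ({e.Name})");
watcher.Renamed += (s, e) => Console.WriteLine($"{e.ChangeType}: {e.FullPath} ({e.Name})");
watcher.EnableRaisingEvents = true;

Console.WriteLine("Watching. Press [Enter] to exit.");
Console.ReadLine();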

Whether in NodeJS or .Net, the file system watcher provides a simple and efficient method for detecting when vital files have been updated. If you decide to add features to your application to make it responsive to file changes, you’ll want to use it in your solutions.

Find the source code for sample apps here

https://github.com/j2inet/FileSystemWatcherDemo

.Net Sample App

The .Net Sample App monitors the executable directory for the files content.txt and title.txt. The application has a title area and a content area. If the contents of the files change, the application UI updates accordingly. I made this a WPF app because the binding features make it especially easy to present the value of a variable with minimal custom code. I did make use of some custom base classes to keep the app-specific code simple.

using System;
using System.IO;
using System.Linq;

namespace FileSystemWatcherSample.ViewModels
{
    public class MainViewModel : ViewModelBase
    {
        // Hold the watcher in a field so it stays alive for the
        // lifetime of the view model.
        private readonly FileSystemWatcher _fsw;

        public MainViewModel() {
            var assemblyFile = new FileInfo(this.GetType().Assembly.Modules.FirstOrDefault().FullyQualifiedName);
            var parentDirectory = assemblyFile.Directory;
            Directory.SetCurrentDirectory(parentDirectory.FullName);

            _fsw = new FileSystemWatcher(parentDirectory.FullName);
            _fsw.Filter = "*.txt";
            _fsw.Created += FswCreatedOrChanged;
            _fsw.Changed += FswCreatedOrChanged;
            _fsw.NotifyFilter = NotifyFilters.CreationTime | NotifyFilters.LastWrite | NotifyFilters.FileName;
            _fsw.EnableRaisingEvents = true;
        }


        void FswCreatedOrChanged(object sender, FileSystemEventArgs e)
        {
            var name = e.Name.ToLower();
            switch (name)
            {
                case "content.txt":
                    try
                    {
                        Content = File.ReadAllText(e.FullPath);
                    }
                    catch (IOException)
                    {
                        // The writer may still hold the file open; show a placeholder.
                        Content = "<unreadable>";
                    }
                    break;
                case "title.txt":
                    try
                    {
                        Title = File.ReadAllText(e.FullPath);
                    }
                    catch (IOException)
                    {
                        Title = "<unreadable>";
                    }
                    break;
                default:
                    break;
            }
        }

        private string _title = "<<empty>>";
        public string Title
        {
            get => _title;
            set => SetValueIfChanged(() => Title, () => _title, value);
        }

        string _content = "<<empty>>";
        public string Content
        {
            get => _content;
            set => SetValueIfChanged(()=>Content, ()=>_content, value);
        }
    }
}

Node Sample App

The Node Sample App runs from the console. In operation, it is much simpler than the .Net application. When a file is updated, it prints a notification to the screen.

const fs = require('fs')
const readline = require('readline');
const path = require('path');

// Ask a question on the console and resolve with the user's answer.
function promptUser(query) {
    const rl = readline.createInterface({
        input: process.stdin,
        output: process.stdout,
    });

    return new Promise(resolve => rl.question(query, ans => {
        rl.close();
        resolve(ans);
    }))
}

// Watch the config folder that sits next to this script.
let watchPath = path.join(__dirname, 'config');
console.log(watchPath);
let watcher = fs.watch(watchPath);
watcher.on('change', (event, fileName) => {
    console.log(event);
    console.log(fileName);
    // A real application would react to specific files here, e.g.:
    // if (fileName == 'asset-config.js') { /* reload the asset configuration */ }
});
console.log('asset watcher activated');

// The pending prompt keeps the process alive until [Enter] is pressed.
promptUser('press [Enter] to terminate program.').then(() => process.exit(0));

Retro: Building a Motorola 6800 Computer Part 1

I was cleaning out a room and came across a box of digital components. Among these were a few ICs for microcontrollers and microprocessors. Seeing them caused me to revisit an interest I had in computer hardware before I decided on the path of a Software Engineer. I decided to make a simple yet functional computer with one of the processors. I selected the Motorola 6808 from what was available. There was a more capable Motorola 68K among the ICs, but I chose the 6808 since it would require fewer external components and would be a great starting point for building something. It could make for a great teaching aid for understanding some computer fundamentals.

MC6800 Series Hello World on YouTube

Hello World

The first thing I want to do with it is simple. I just want to get the processor into a state where it can run without halting. This will be my Hello World equivalent. Often with Hello World programs, the goal is simply to produce something that compiles, runs without failing, and performs some observable action. Hello World programs validate that one’s build system is properly configured to begin producing something. The program itself is trivial.

About the 6800

This processor family is from before my time, first produced in 1974. The MC6800 series of processors comes in a few variants. They differ in their amount of internal RAM, stand-by capabilities, and clock speed. These are small variations. I’m using the MC6808, but will refer to it as a 6800 since most of what I write here is applicable to all of these processors. This 8-bit processor has only a few registers to track, a 16-bit address bus, and a few control lines. Like any processor, it has a program counter and stack pointer. It also has an index register and a couple of accumulators. The index register, stack pointer, and program counter are all 16-bit, while the two accumulators are 8-bit.

The processor natively performs only integer math operations, but there is a library for floating point operations. In times past it was distributed as an 8K ROM, but the source code for this library is readily available and could be placed on one’s own ROM. You can find the source code on GitHub.

MC6800 block diagram. Image credit: Wikipedia.org

Instruction Set

This processor has an instruction set of only 72 instructions. An instruction plus its operands is usually between 1 and 3 bytes. At this size and simplicity, one could even put together a simple program without an assembler. Many instructions are variations on the same high-level operation with a different addressing mode. For my goal, I don’t need a deep understanding of the instruction set. I just needed to know of a one-byte instruction that could execute without any additional hardware or memory. Many processors support an instruction often called nop, standing for “No Operation.” This instruction, as its name suggests, does nothing beyond take up space. My plan was to hard-wire this instruction into the system. This would let it run without any RAM and without causing any faults or halting conditions.

For this processor, the numerical value of the nop instruction is 0x01. This is an easy encoding to remember. To wire this instruction into the circuit, I only need to connect the least significant bit of the processor’s data bus (D0) to a high signal and tie the other data lines (D1 through D7) to a low signal.

Detecting Activity

It is easy to think of a processor that is only executing nop instructions as doing nothing at all. That isn’t the case, though. The processor is still incrementing its program counter. As it does, it asserts the new address on the processor’s address lines to specify the next instruction that it is trying to fetch. Some output status lines will also indicate activity. The R/!W line will indicate read operations, the BA (Bus Available) line will stay low as long as the processor isn’t halted, and the VMA line will be high when the processor is asserting a valid address on the address bus. The processor also responds to some input lines. There are three active-low inputs that affect execution: RESET, HALT, and IRQ. I’ll need to ensure those are tied high so they stay inactive. Most important of all, the processor needs to receive a clock signal within an acceptable range. The clock signal is necessary for the processor to coordinate its actions. If the clock rate is too high or too low, the processor might not function correctly. That said, I’m going to intentionally run the processor at a rate lower than what is on the spec sheet, for reasons to be discussed.

As the processor runs, I should be able to monitor what’s going on through a few lines, especially the address lines. If I connect light emitting diodes (LEDs) to the address lines, I can observe whether each line is in a high or low state by seeing which LEDs are on or off. But with the processor running at a clock speed of 1 MHz – 2 MHz, it could cycle through its entire address space faster than I can perceive. If I run the clock at a reduced speed, the processor will progress slowly enough for me to watch the address lines increment. To achieve this, I’m going to build a clock circuit and put its output through a counter IC. If you are familiar with digital counting circuits, you know that each binary digit changes at half the rate of the digit before it. I can use the counter’s outputs to get clock rates of 1/2, 1/4, 1/8, …, 1/256 of the original. Dividing a 4 MHz clock by 256 yields 15.625 kHz, well into the kilohertz range, which is slow enough to see the upper address lines change.

The Circuit

For the clock circuit, I have a 4 MHz crystal wired into a circuit with some inverters, resistors, and capacitors. I take the output of that and pass it through another inverter before passing it on to the processor (or to the counter between the clock and the processor).

For the processor, most of the work is connecting LEDs with resistors to limit the current. Additionally, I have the instruction 0x01 hard-wired to the data bus. With this wired, the only thing the system needs is power.

The Outcome

I’m happy to say that this worked. The processor started running, and I could see the address bus values increasing through the LEDs on the most significant bits.

Next Steps

Now that I have the processor in a working state, I want to replace the hard-wired instruction with an EPROM and add RAM. Once I’m confident that all is well with the EPROM and RAM, I’ll add some interfaces to the outside world. While the parts I think I’ll need are generally out of production (though there are some derivative processors still available new), used versions are available for only a few dollars. Overall, though, this is a temporary diversion. Once it is developed to a certain point, it will be shelved, but that’s not the end of my hardware exploration. There are some things I’d like to do with ARM processors (likely an STM32). Many of the ARM processors I’ve looked at are fairly complete system-on-a-chip components and don’t require much hardware beyond a clean power supply to reach a minimal working state.

Resources

One of the nice things about dabbling in Retro Computing is that there are plenty of sources available for the hardware. If you find this interesting and want to try some things out yourself, here are some resources that may be helpful.



Xamarin: “The Application cannot be launched because it is not installed”

While working on a Xamarin project for iOS from a Windows PC, I ran into a situation where I could no longer debug the application. There had been no changes in source code between when I could debug and when I could not. A search for the error took me to other places where the problem had been discussed but not resolved. While I’ve been able to resolve the problem for myself, the other discussions were closed and I couldn’t post a resolution there. In the absence of another place to put this solution, I’m hosting it myself.

The more complete text of the error is as follows.

The application 'MyApplication' cannot be launched or debugged because it's not installed The app has been terminated.

Of course, MyApplication would be the name of your application if you encounter this. While I don’t know what causes it, resolving it is a simple matter of erasing files. For my Xamarin project I’m using Visual Studio Community 2022 on a Windows machine and communicating with an M1 Mac for compilation. On the M1, I had to navigate to the path $HOME/Library/Caches/Xamarin/mtbs/builds/ and erase the files and folders there. Returning to my solution on Windows, I got some other error about files not being found, which was resolved by manually selecting dependency projects and recompiling those. After that, I was able to compile and debug the project as I could before.
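
For the terminal-inclined, clearing that cache on the Mac can be done with one command. This is just the shell equivalent of deleting the files manually, assuming the cache path above.

# Remove the cached Xamarin build artifacts on the Mac build host.
rm -rf "$HOME/Library/Caches/Xamarin/mtbs/builds"/*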

I’m not sure what causes this error. I would have liked to have looked into it further. But delivery deadlines do not allow further examination. That said, there have been a few other low-frequency errors that I’ve encountered that are resolved by simply clearing this folder.

I hope that this solution is helpful to someone.



Enterprise Apple Certificates and Expiration

I recently explained the expiration behaviour of Apple Distribution certificates to someone, and thought it was worth sharing.

I often work on iOS applications signed with an Enterprise certificate. Applications signed with these certificates can be distributed directly to the device, such as through a Mobile Device Manager or through the browser. They cannot be distributed through the App Store. These applications are signed with a distribution certificate. The distribution certificate can last up to one year, but may expire sooner; it will not last beyond the expiration of the account. If an app is signed by an account that has 7 months until renewal, then the distribution certificate will also expire in 7 months.

Usually, this hasn’t been a problem for me. Many of the applications I work on are either used for a predefined time period, such as a holiday event, and then shelved, or they are applications receiving updates, in which case they occasionally get new distribution certificates. I had a client that requested an iOS application be signed such that it would not expire. Someone in the client’s development department had re-signed and redeployed the application when it reached its first expiration, but the client wanted to be independent of the development department altogether.

Unfortunately, this is not an option for iOS apps. The only way to have a version of the application that is immune to expiration would be to run it in an operating environment that doesn’t demand apps be signed with certificates that expire in a year or less. That is an option with Windows and Android, but not with iOS. The best situation with iOS requires a Mobile Device Manager (MDM). With an MDM, there is the option of creating an updated distribution profile and pushing it out to the devices. Without an MDM, rebuild-and-redeploy is the only option.

This may be something to consider when choosing hardware for a solution within an organization. iOS hardware is consistent in its form, performance, and so on. While Android offers more openness, the variance in hardware is both an advantage and a disadvantage. I appreciate being able to make an app and install it on an Android device very quickly. Of course, the ability to do this easily also comes with the potential for bad actors doing the same. The barrier to getting malicious code on an iOS device is a bit higher.

