My First Infrared Photograph 📷

I have a DSLR that has been modified for infrared photography. Digital cameras usually have filters that block light outside of the visible spectrum. Without these filters, though we can’t see this light, the camera’s sensor will still respond to it. If you have an IR remote, you can test this yourself by viewing the emitter end through your phone’s camera while holding down a button on the remote.

The world looks different when you view how IR or UV light interacts with objects. There are elements of an object that may be invisible until you view them in another spectrum, and elements that might disappear altogether. I wanted to explore this more, which is why I have this modified camera. There are a few things that I’ve learned along the way. Once a camera is modified for IR shooting, it’s not very good for regular photography; you won’t want to take family photos or vacation pictures with it. The viewfinder on the DSLR also becomes useless for some shooting situations: since our eyes cannot see IR light, when some filters are applied to the camera, the view through the viewfinder looks black. Instead, we must enable live view on the camera’s display to preview the image.

Removing the IR-blocking filter from the camera isn’t the only adjustment that needs to be made. Better results come from also adding a filter to the lens that blocks some of the visible colors of light. Post-processing of the photo will also be necessary.

I’ve been wanting to take a photograph with the camera for the last week, but obligations to others and my work schedule prevented me from being available during the prime hours of the day, when the sun would be where I wanted it. Today, between meetings, I had a chance to run outside, snap a couple of photographs of a flower, and run back inside. The image at the top of this post is one of the results. That image was taken with a filter on the lens that only lets IR light pass. If I remove the filter and allow visible light to come through, I get a picture like the following: a photo that looks partially desaturated. Though color is present, the influence of the IR on the photo is discernible.

While out at “the farm” I took some pictures of some chickens. As appears to be the case with many things that are red, in infrared they appear closer to white. The chickens’ combs look especially white.

I’ll be taking more photographs with this camera from time to time. When I do, you’ll generally be able to find them on my Instagram page.

There’s a lot more for me to learn. I hope to have some interesting shots to post.


Posts may contain products with affiliate links. When you make purchases using these links, we receive a small commission at no extra cost to you. Thank you for your support.

Mastodon: @j2inet@masto.ai
Instagram: @j2inet
Facebook: @j2inet
YouTube: @j2inet
Telegram: j2inet
Twitter: @j2inet

Rechargeable USB-C Batteries

Over the past year, I’ve transitioned some of my devices from conventional to rechargeable batteries. I’ve used rechargeable batteries before and had generally been disappointed with them. The need for a separate charger for each battery type sometimes meant extra hardware to keep up with.

With these batteries, one of the main advantages is that they charge over a USB-C cable. Though they came with USB-A to USB-C cables for charging, I generally use the cables and power supplies that I already have for my phones.

These don’t last as long as a conventional battery, but I’m okay with that since the charging experience is no-fuss. I often forget to turn off my voice recorder and run through batteries in it quickly. With these, it is less of a concern.

Presently, I’m using AA, AAA, and 9-Volt batteries. You can find all of these on Amazon among other places. The affiliate links for the ones that I purchased are below.



Triangle Collision (2D)

With some of the free time I had, I decided to remake a video game from my childhood. Though the game was rendered with 3D visuals, it was essentially a 2D game. I’ll need to detect collisions when two actors in the game occupy the same footprint. The footprints will usually be rectangular, but those rectangles could be oriented in any way. Detecting overlap between orthogonal (axis-aligned) bounding boxes is a simple matter of checking whether some points are within range. But if the rectangles are not oriented parallel or perpendicular to each other, a different check must be done.
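
For the orthogonal case, the range check looks something like this minimal sketch. The AABB structure and function names here are my own illustration, not from the game code; they just show the simple case before we handle the oriented one.

struct AABB
{
	float minX, minY;
	float maxX, maxY;
};

bool AABBOverlap(const AABB& a, const AABB& b)
{
	// Overlap occurs unless one box lies entirely to one side of the other.
	return a.minX <= b.maxX && b.minX <= a.maxX
		&& a.minY <= b.maxY && b.minY <= a.maxY;
}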

Rectangles can be constructed from two triangles, so the solution to detecting overlapping rectangles is built upon detecting intersection with a triangle (I sketch that rectangle check after the triangle routine below). Let’s look at point-triangle intersection first and visualize some overlapping and non-overlapping scenarios.

Examining this visually, you can intuitively state whether any selected pair of triangles or points intersect with each other. But if I provided you only with the points that make up each triangle, how would you state whether overlap occurs? An algorithmic solution is needed. The two solutions I show here did not originate with me; I found solutions on StackOverflow and RosettaCode. But what I present here is not a copy and paste of those solutions. They’ve been adapted somewhat to my needs.

For my points and triangles, I’ll use the following structures.

struct Point
{
	float x;
	float y;	
};

struct Triangle
{
	Point pointList[3];
};

There are lots of ways to detect whether a point is within a triangle. Before writing my own, I found a simple one on GameDev.net. This is an adjusted implementation.

float sign(Point p1, Point p2, Point p3)
{
	return (p1.x - p3.x) * (p2.y - p3.y) - (p2.x - p3.x) * (p1.y - p3.y);
}

bool pointInTriangle(Point pt, Triangle tri)
{
	float d1, d2, d3;
	bool has_neg, has_pos;
	d1 = sign(pt, tri.pointList[0], tri.pointList[1]);
	d2 = sign(pt, tri.pointList[1], tri.pointList[2]);
	d3 = sign(pt, tri.pointList[2], tri.pointList[0]);
	has_neg = (d1 < 0) || (d2 < 0) || (d3 < 0);
	has_pos = (d1 > 0) || (d2 > 0) || (d3 > 0);
	return !(has_neg && has_pos);
}

Usage is simple. Passing a point and a triangle to the function pointInTriangle returns a bool value that is true if the point is within the triangle.

Point p1 = { 5, 7 };
Point p2 = { 3, 4 };
Point p3 = { 3, 3 };

Triangle t1 = { { { 2, 2 }, { 5, 6 }, { 10, 0 } } };

std::wcout << "Point p1 " << (pointInTriangle(p1, t1) ? "is" : "is not") << " in triangle t1" << std::endl;
std::wcout << "Point p2 " << (pointInTriangle(p2, t1) ? "is" : "is not") << " in triangle t1" << std::endl;
std::wcout << "Point p3 " << (pointInTriangle(p3, t1) ? "is" : "is not") << " in triangle t1" << std::endl;

Having point-triangle intersection is great, but I wanted triangle-triangle intersection. I decided to use an algorithm from RosettaCode. The code I present here isn’t identical to what was presented there, though. Some adjustments have been made, as I prefer to avoid using explicit pointers to functions. My definitions of Triangle and Point are expanded to accommodate the implementation’s use of indices to access the parts of the triangle. By unioning the two definitions, I can use either notation for accessing the members.

struct Point
{
	union {
		struct {
			float x;
			float y;
		};
		float pointList[2];
	};
};

struct Triangle
{
	union {
		struct {
			Point p1;
			Point p2;
			Point p3;
		};
		Point pointList[3];
	};	
};

The algorithm makes use of the determinant, an operation from matrix math. I’m going to skip over explaining the concept of determinants in general; matrix operations deserve their own post. For this code, it’s enough to know that Det2D returns twice the signed area of the triangle: positive when the points are wound counter-clockwise and negative when they are wound clockwise.

inline double Det2D(const Triangle& triangle)
{
	return triangle.p1.x * (triangle.p2.y - triangle.p3.y)
		+ triangle.p2.x * (triangle.p3.y - triangle.p1.y)
		+ triangle.p3.x * (triangle.p1.y - triangle.p2.y);
}

Another key attribute of this implementation is that it optionally corrects the winding of a triangle, reordering the points into counter-clockwise order when needed. If triangles are always passed in the expected order, there is an opportunity for faster execution of the code. I chose general usability over speed, but I did mark the code with C++ 20 likelihood attributes to suggest to the compiler that triangles will usually arrive already wound counter-clockwise.

void CheckTriWinding(Triangle& t, bool allowReversed = true)
{
	double detTri = Det2D(t);
	if (detTri < 0.0) [[unlikely]]
	{
		if (allowReversed)
		{
			// Swap the first and last points to flip the winding to counter-clockwise.
			std::swap(t.p1, t.p3);
		}
		else throw std::runtime_error("triangle has wrong winding direction");
	}
}

The original algorithm had a couple of functions used to check for boundary collisions or to ignore boundary collisions. I have expanded them from two functions to four. Whereas the original often passed a triangle to a function as three arbitrary points, I prefer to pass triangles as a packaged structure. There is a place in the algorithm where a calculation is performed on a triangle formed from two points of one triangle and one point of the other triangle. The second form of each function accepts points and creates a triangle from them.

bool BoundaryCollideChk(const Triangle& t, double eps)
{
	return Det2D(t) < eps;
}

bool BoundaryCollideChk(const Point& p1, const Point& p2, const Point& p3, double eps)
{
	Triangle t = { {{p1, p2, p3}} };
	return BoundaryCollideChk(t, eps);
}

bool BoundaryDoesntCollideChk(const Triangle& t, double eps)
{
	return Det2D(t) <= eps;
}

bool BoundaryDoesntCollideChk(const Point& p1, const Point& p2, const Point& p3, double eps)
{
	Triangle t = { {{p1, p2, p3}} };
	return BoundaryDoesntCollideChk(t, eps);
}

With all of those pieces in place, we can finally look at the part of the algorithm that returns true or false to indicate triangle overlap. The algorithm checks each edge of one triangle to see whether all of the points of the other triangle lie on its external side. If they do, that edge separates the triangles and no collision has been detected. It then performs the same check with the roles of the triangles swapped. If no separating edge is found, the triangles collide.

bool TriangleTriangleCollision(Triangle triangle1,
	Triangle triangle2,
	double eps = 0.0, bool allowReversed = true, bool onBoundary = true)
{
	//Triangles must be expressed anti-clockwise. CheckTriWinding corrects
	//reversed triangles in place on these local copies when allowed.
	CheckTriWinding(triangle1, allowReversed);
	CheckTriWinding(triangle2, allowReversed);

	//For each edge E of triangle 1,
	for (int i = 0; i < 3; i++)
	{
		int j = (i + 1) % 3;
		if (onBoundary) [[likely]]
		{

			//Check whether all points of triangle 2 lie on the external side of
			//the edge E. If they do, the triangles do not collide.
			if (BoundaryCollideChk(triangle1.pointList[i], triangle1.pointList[j], triangle2.pointList[0], eps) &&
				BoundaryCollideChk(triangle1.pointList[i], triangle1.pointList[j], triangle2.pointList[1], eps) &&
				BoundaryCollideChk(triangle1.pointList[i], triangle1.pointList[j], triangle2.pointList[2], eps))
				return false;
		}
		else
		{
			if (BoundaryDoesntCollideChk(triangle1.pointList[i], triangle1.pointList[j], triangle2.pointList[0], eps) &&
				BoundaryDoesntCollideChk(triangle1.pointList[i], triangle1.pointList[j], triangle2.pointList[1], eps) &&
				BoundaryDoesntCollideChk(triangle1.pointList[i], triangle1.pointList[j], triangle2.pointList[2], eps))
				return false;
		}

		if (onBoundary)
		{

			//Check whether all points of triangle 1 lie on the external side of
			//the edge E of triangle 2. If they do, the triangles do not collide.
			if (BoundaryCollideChk(triangle2.pointList[i], triangle2.pointList[j], triangle1.pointList[0], eps) &&
				BoundaryCollideChk(triangle2.pointList[i], triangle2.pointList[j], triangle1.pointList[1], eps) &&
				BoundaryCollideChk(triangle2.pointList[i], triangle2.pointList[j], triangle1.pointList[2], eps))
				return false;
		}
		else
		{
			if (BoundaryDoesntCollideChk(triangle2.pointList[i], triangle2.pointList[j], triangle1.pointList[0], eps) &&
				BoundaryDoesntCollideChk(triangle2.pointList[i], triangle2.pointList[j], triangle1.pointList[1], eps) &&
				BoundaryDoesntCollideChk(triangle2.pointList[i], triangle2.pointList[j], triangle1.pointList[2], eps))
				return false;
		}
	}
	//The triangles collide
	return true;
}

The basic usage of this code is similar to the point collision: pass the two triangles to the function.

Triangle t1 = { Triangle{Point{3, 6}, Point{6, 5}, Point{6, 7} } },
			t2 = { Triangle{Point{4, 2}, Point{1, 5}, Point{6, 4} } },
			t3 = { Triangle{Point{3, 12}, Point{9, 8}, Point{9, 12} } },
			t4 = { Triangle{Point{3, 10}, Point{5, 9}, Point{5, 13} } };

auto collision1 = TriangleTriangleCollision(t1, t2);
std::wcout << "Triangles t1 and t2 " << (collision1 ? "do" : "do not") << " collide" << std::endl;
auto collision2 = TriangleTriangleCollision(t3, t4);
std::wcout << "Triangles t3 and t4 " << (collision2 ? "do" : "do not") << " collide" << std::endl;
auto collision3 = TriangleTriangleCollision(t1, t3);
std::wcout << "Triangles t1 and t3 " << (collision3 ? "do" : "do not") << " collide" << std::endl;

With that in place, I’m going to get back to writing the game. I can now check to see when actors in the game overlap.

Finding the Code

You can find the code in the GitHub repository at the following address.

https://github.com/j2inet/algorithm-math/blob/main/triangle-intersection/triangle-intersection.cpp



No, T-Mobile is Not Going to Fine Customers for Disliked SMS

Bottom Line: A rule affecting robo-texting (which is done without a phone) was mistaken for a rule governing and punishing users of T-Mobile phones. T-Mobile has not applied any new fines to customers for their SMS content.

Update 2023 December 31 11:58PM EST

Since I made this post, there have been a number of other statements and posts about this misunderstanding that I think are worth considering. The Associated Press published an article about this misunderstanding/misinformation.

“The change only impacts third-party messaging vendors that send commercial mass messaging campaigns for other businesses,” the company wrote in a statement emailed to The Associated Press.

Associated Press

There is now a community note under the post of one of the accounts responsible for many views of this item of misinformation on Twitter/X.

…Bandwidth the company the user is referring does not provide consumer service. These specific terms of service are for commercial/enterprise users of the T-Mobile network. This does not directly apply to P2P (Peer to peer) messaging.

Twitter/X Community note

There was also a post in the T-Mobile Community forums by a user that received a response from a T-Mobile Community Manager.

These changes only apply to third-party messaging vendors that send commercial mass messaging campaigns for other businesses. The vendors will be fined if the content they are sending does not meet the standards in our code of conduct, which is in place to protect consumers from illegal or illicit content and aligns to federal and state laws. 

HeavenM, T-Mobile Community Manager

Original Post

I watched a misunderstanding grow legs and spread pretty far during the Christmas holiday weekend. The gist of the misinformation is that T-Mobile is going to fine customers hundreds or thousands of dollars for sending text messages of which T-Mobile doesn’t approve. This misunderstanding appears to be derived from a reading of a business code of conduct, but without an understanding of the terms involved or the partnerships between various companies. I’ll try to explain both in a moment.

Consumer and Non-Consumer Messages

If you have a T-Mobile phone and are sending text messages, those are consumer text messages. If you start a marketing campaign to send out mass SMS messages, create a 2FA service that sends out messages, or use some other API access, those are non-consumer messages. A basic understanding of how consumer messages work is common, but not so for non-consumer messages. Let’s dig into that a bit more.

Non-consumer messages are often sent through an API. These messages are sometimes labeled A2P messages, which means “Application to Person.” The origin of these messages is not a phone. T-Mobile, AT&T, and Verizon provide the ability to post messages into their infrastructure for delivery. How do you get access to this functionality? Generally, you don’t. They don’t let everyone have access to these services. Instead, there are a number of companies with which they have established agreements and granted access. If you want to use these services, you would establish a relationship with one of these companies, and they manage many of the other details of what needs to occur. Here are some names of companies that provide such services.

  • Amazon Web Services
  • Bandwidth
  • BulkSMS
  • Call Hub
  • EZTexting
  • Hey Market
  • MessageBird
  • Phone.com
  • SalesMSG
  • Textr
  • Vonage
  • Wire2Air
  • Zixflow

There are a lot of other companies. You can find a larger list here. T-Mobile also has a document explaining the difference between consumer and non-consumer messaging.

As far as a consumer knows, the messages are coming from a 10-digit phone number. These phone numbers may be referred to as A2P 10DLC (Application to Person 10-Digit Long Code). When a new messaging campaign is started, the number associated with the campaign must be registered, and the registration associates the number with an entity, brand, or campaign. Unlike short-code messages, if a business also needs to allow customers to call them, they have the option of setting up a voice line associated with the same phone number.

The entities and phone numbers may also be assigned a trust rating. Entities that have a higher trust rating may be granted more throughput on a carrier’s network. If an entity’s trust rating becomes low, their granted capacity might be lowered or their messages may be disallowed altogether. Entities earn a reputation.

What Are These Rules Restricting?

In a nutshell, the rules published by T-Mobile, AT&T, and Verizon disallow text messaging campaigns for unlawful material (including material that may be lawful in some states but not others), scam messages, spam, phishing attempts, and impersonation. Some carriers may specifically call out other types of material, but the same general characterization of restrictions applies. The following is what Bandwidth posted about the notice (Update 2024 January 2: Bandwidth.com has since restricted viewing of the notice. A screenshot of the notice can be viewed here).

T-Mobile is instituting three new fees for non-compliant A2P traffic sent by non-consumers that result in a Severity-0 violation. A Sev-0, (Severity-0) represents the most harmful violation to consumers and is the highest level of escalation with which a carrier will engage with Bandwidth. This applies to all commercial, non-consumer, A2P products (SMS or MMS Short Code, Toll-Free, and 10DLC) that traverse T-Mobile’s network.

With what I’ve shared so far, you may be able to recognize this as something that is not directed at regular customers using T-Mobile. The false information I encountered about the change misidentifies the affected parties as subscribers of the phone service. Let’s dig into what a Sev-0 violation is.

  • Phishing messages that appear to come from reputable companies
  • Depictions of violence; messages engaging in harassment, defamation, deception, or fraud
  • Adult material
  • High-risk content that generates a lot of user complaints, such as home offers, payday loans, and gambling content
  • Sex, Hate, Alcohol, Firearms, and Tobacco related content (SHAFT)

There is other content in this category. The above isn’t exhaustive.

Bandwidth lists messages related to phishing and social engineering as carrying the highest fee, at 2,000 USD per violation. Second, at a 1,000 USD fine, is unlawful content such as controlled substances or substances not lawful in all 50 states. The lowest fine that Bandwidth mentions, 500 USD, is for violations of SHAFT rules or messages that don’t follow state or federal regulations.

Overall, this looks to be a move that may motivate A2P partners to make more effort to filter out certain types of content that are at least generally annoying, if not worse.

What About the Other Carriers?

In the USA there are three nationwide carriers; AT&T, T-Mobile, and Verizon offer service across the nation. I don’t know if Verizon or AT&T have fines associated with violations, but they do have codes of conduct for A2P providers. If you’d like to read their code of conduct documents, you can find them here.

There’s a lot of overlap in their rules. This may come as no surprise, especially to those familiar with CTIA. CTIA is a wireless trade organization representing carriers in the USA, along with suppliers and manufacturers of wireless products. It has been around since the mid-1980s. Current members of CTIA include AT&T, T-Mobile, Verizon, and US Cellular. You can find a list of members here.

There is a lot more that could be said about how these messaging services work; I may detail it further one day. But for now, the main take-away is that the popular claim about the changes T-Mobile is implementing is recognizable as a mistaken interpretation once one has a casual familiarity with the terms involved.



Updating Christmas Gifts After they Are Sent

Christmas is around the corner. Among the items of interest this year is the Analogue Pocket, an FPGA-based device that can hardware-emulate a lot of older game consoles, along with having some games of its own. I’m getting one prepared for someone else, but I need to send the device soon to ensure that it arrives at its destination before Christmas. This creates a conflict between getting more games loaded and shipping it on time. No worries; I can satisfy both if I send the device with something to upload the content.

A lot of physical game releases do this, when there is a zero-day “patch” for a game, or when the disc is only a license for the game and the actual game and its content are only available online. I’ll be shipping the memory card for the device with a call to action to run the “game installer” on the memory card. After the card is mailed, I can take care of preparing the actual image. The game installer will reach out to my website to find a list of files to download to the memory card, zip files to decompress, or folders to create.

Safety

Though I’m the only one that will be making payloads for my downloader to run, I still imagined some problem scenarios that I wanted to make impossible or more difficult. What if someone were to modify the download list so that it targeted writing files to a system directory or some other location? I don’t want this to happen. I’ve made my downloader so that it can only write to the folder in which it lives and to subfolders. The characters that are needed to get to some parent level or to another drive, if present in the download list, will intentionally cause the application to crash.

Describing an Asset

I started by describing the information that I would need to download an asset. An asset could be a file, a folder, or a zip file. I’ve got an enumeration for flagging these types.

    public enum PayloadType
    {
        File,
        Folder,
        ZipFile,
    }

Each asset of this type (which I will call a “Payload” from here on) can be described with the following structure.

    public class PayloadInformation
    {
        [JsonPropertyName("payloadType")]
        [JsonConverter(typeof(JsonStringEnumConverter))]
        public PayloadType PayloadType { get; set; } = PayloadType.File;

        [JsonPropertyName("fileURL")]
        public string FileURL { get; set; } = "";

        [JsonPropertyName("targetPath")]
        public string TargetPath { get; set; } = "";

    }

For files and zip archives, the FileURL property contains the URL of the source. The TargetPath property contains a relative path to where this payload item should be downloaded or unzipped. A download set could have multiple assets; I broke up the files for the device that I was sending into several zip files. Sorry, but in the interest of not inundating my site with several people trying this out, I’m not exposing the actual URLs for the assets here. The application will be grabbing a collection of these PayloadInformation items.

    public class PayloadInformationList: List<PayloadInformation>
    {
        public PayloadInformationList() { }
    }

The list of assets is placed in a JSON file and made available on a web server.

[
  {
    "payloadType": "ZipFile",
    "fileURL": "https://myserver.com/Pocket.zip",
    "targetPath": "",
    "versionNumber": "0"
  },


  {
    "payloadType": "ZipFile",
    "fileURL": "https://myserver.com/Assets_1.zip",
    "targetPath": "Assets",
    "versionNumber": "0"
  },

  {
    "payloadType": "ZipFile",
    "fileURL": "https://myserver.com/Assets_2.zip",
    "targetPath": "Assets",
    "versionNumber": "0"
  },

  {
    "payloadType": "ZipFile",
    "fileURL": "https://myserver.com/Assets_3.zip",
    "targetPath": "Assets",
    "versionNumber": "0"
  },

  {
    "payloadType": "ZipFile",
    "fileURL": "https://myserver.com/Assets_4.zip",
    "targetPath": "Assets",
    "versionNumber": "0"
  },
  {
    "payloadType": "Folder",
    "targetPath": "Memories/Save States",
    "versionNumber": "1"
  },
  {
    "payloadType": "Folder",
    "targetPath": "Assets",
    "versionNumber": "1"
  }
]

I might use some form of this again someday, so I’ve placed the initial URL from which the download list is retrieved in the application settings. In the compiled application, the application settings are saved in a JSON file that can be altered with any text editor.

About the Interface

The user interface for this application uses WPF. I grabbed a set of base classes that I often use with WPF applications. I made this with a build of Visual Studio that was released just a month ago and contains significant updates, and I found that my base class no longer works as expected under this new version of Visual Studio. That’s something I will have to tackle another day, as I think there has been a change in the relationship between Linq Expressions and Member Expressions. For now, I just used a subset of the functionality that the classes offered. Most of the work done by the application can be found in MainViewModel.cs.

To retrieve the list of assets, I have a method named GetPayloadList() that downloads the JSON containing the list of files and deserializes it. Though I would usually use JSON.Net for serialization needs, here I used System.Text.Json. I also check the paths for characters indicating an attempt to go outside of the application’s root directory and throw an exception if this occurs.

async Task<List<PayloadInformation>> GetPayloadList()
{
    HttpClient client = new HttpClient();
    var response = await client.GetAsync(DownloadUrl);
    var stringContent = await response.Content.ReadAsStringAsync();
    var payloadList = JsonSerializer.Deserialize<List<PayloadInformation>>(stringContent);
    payloadList.ForEach(p =>
    {
        if (!String.IsNullOrEmpty(p.TargetPath))
        {
            if (p.TargetPath.Contains("..") || p.TargetPath.Contains(":") ||
                p.TargetPath.StartsWith("\\") || p.TargetPath.StartsWith("/")
            )
           {
               throw new Exception("Invalid Target Path");
            }
        }
    });
    return payloadList;
}

Within MainViewModel::DownloadRoutine() (which runs on a different thread), I step through the payload descriptions one at a time and take action for each. For folder items, the application just creates the folder (and parent folders if needed). For files, the file is downloaded from the web source to a temporary file on the computer. After it is completely downloaded, it is moved to its final location; this reduces the chance of there being a partially downloaded file on the memory card. The process for zip files is a variation of what is done for files: the zip file is downloaded to a temporary location and then decompressed from that temporary location to its target folder.

while (_downloadQueue.Count > 0)
{
    Phase = "Downloading...";
    var payload = _downloadQueue.Dequeue();
    DownloadProgress = 0;
    CurrentPayload = payload;
    switch (payload.PayloadType)
    {
        case PayloadType.File:
            {
                Phase="Downloading";
                var response = client.GetAsync(payload.FileURL).Result;
                var content = response.Content.ReadAsByteArrayAsync().Result;
                var tempFilePath = Path.Combine(TempFolder, payload.TargetPath);
                var fileName = Path.GetFileName(payload.FileURL);
                File.WriteAllBytes(tempFilePath, content);
                File.Move(tempFilePath, payload.TargetPath, true);
            }
            break;
        case PayloadType.Folder:
            {
                Phase = "Creating Directory";
                var directoryName = payload.TargetPath.Replace('/', Path.DirectorySeparatorChar);
                var directoryInfo = new DirectoryInfo(directoryName);
                if (!directoryInfo.Exists)
                {
                    directoryInfo.Create();
                }
            }
            break;
        case PayloadType.ZipFile:
            {
                WebClient webClient = new WebClient();
                webClient.DownloadProgressChanged += DownloadProgressChanged;
                webClient.DownloadFileCompleted += WebClient_DownloadFileCompleted;
                var tempFilePath = Path.Combine(TempFolder, Path.GetTempFileName()) + ".zip";
                var fileName = Path.GetFileName(payload.FileURL);
                var directoryName = payload.TargetPath.Replace('/', Path.DirectorySeparatorChar);

                if (String.IsNullOrEmpty(directoryName))
                {
                    directoryName = ".";
                }
                var directoryInfo = new DirectoryInfo(directoryName);
                if (!directoryInfo.Exists)
                {
                    directoryInfo.Create();
                }
                webClient.DownloadFileAsync(new Uri(payload.FileURL), tempFilePath);
                _downloadCompleteWait.WaitOne();
                Phase = "Decompressing";
                System.IO.Compression.ZipFile.ExtractToDirectory(tempFilePath, directoryInfo.FullName,true);

            }
            break;
        default:
            break;
    }
}

Showing Progress

The download process can take a while, so I thought it was important to make it known that the process was progressing. The primary item of feedback shown is a progress bar; as long as it is growing, it’s known that data is flowing. I used the WebClient::DownloadProgressChanged event to get updates on how much of a file has been downloaded and update the progress bar accordingly.

void DownloadProgressChanged(Object sender, DownloadProgressChangedEventArgs e)
{
    // Displays the operation identifier, and the transfer progress.
    System.Diagnostics.Debug.WriteLine("{0}    downloaded {1} of {2} bytes. {3} % complete...", 
                        (string)e.UserState, e.BytesReceived,e.TotalBytesToReceive,e.ProgressPercentage);
    DownloadProgress = e.ProgressPercentage;
}

Handling Errors

There’s a good bit of error handling missing from this code. I made that decision because of time. Ideally, the program would ensure that it has a connection to the server with the source files. This is different from checking whether there is an Internet connection: the computer having an Internet connection doesn’t imply that it has access to the files, nor does having access to the files imply general access to the Internet. Having used a lot of restricted networks, I’m of the position that just making sure there is an Internet connection may not be sufficient.

It is also possible for a download to be disrupted for a variety of reasons. In addition to detecting this, implementing download resumption would minimize the impact of such occurrences.

If I come back to this application again, I might first probe each of the resources with an HTTP HEAD request to see whether they are available. Such requests would also make known the sizes of the files, which could be used to implement a progress bar for the total progress. Slow downloads, though not an error condition, could be interpreted as one by the user. Sufficiently informing the user of what’s going on can help prevent that.

The Code

If you want to grab the code for this and use it for your own purposes, you can find it on GitHub.

https://github.com/j2inet/filedownloader



Iterating Maps in C++

Though I feel like it has become a bit of a niche language, I enjoy coding in C++. It was one of the earliest languages I learned while in grade school. In one of the projects I’m playing with now, I need to iterate through a map. I find the ways in which this has evolved over C++ versions interesting and wanted to show them for comparison. I’m using Visual C++ 2022 as my IDE. It supports up to C++ 20, though it defaults to C++ 14.

Changing the C++ Version

To try out the code that I’m showing here, you’ll need to know how to change the C++ version for your compiler. I’ll show how to do that with Visual C++; if you are using a different compiler, you’ll need to check its documentation. In a C++ project, right-click on the project in the Solution Explorer and select “Properties.” From the tree of options on the left, select Configuration Properties->C/C++->Language. On the right side, the option called C++ Language Standard will let you change the version. The options there at the time that I’m writing this are C++ 14 Standard, C++ 17 Standard, and C++ 20 Standard.

Examples on How to Iterate

A traditional way that you will see for iterating involves using an iterator object for the map. If you look in existing C++ source code, you are likely to encounter this method, since it has been available for a long time and is still supported in newer C++ versions. It follows the same pattern you will see for iterating through other Standard Template Library collections. Though it’s recognizable to those that use the Standard Template Library in general, it does use iterator objects that behave like pointers, and they carry some of the same risks. Note that I am using the C++ 11 auto keyword so the compiler infers the types, making this code more flexible.

for (auto map_iterator = shaderMap.begin(); map_iterator != shaderMap.end(); map_iterator++)
{
     auto key = map_iterator->first;
     auto value = map_iterator->second;
}

A safer method avoids exposed iterators altogether. With this next version, we get an object from which we can directly read the values. I take the item by reference; in optimized builds, the reference ends up being purely notational and doesn’t result in an extra operation. I also think this looks cleaner than the previous example.

for (const auto& mapItem : shaderMap)
{
     auto& key = mapItem.first;
     auto& value = mapItem.second;
}

The last version that I’ll show works in C++ 17 and above. It makes use of structured bindings: in the for-loop declaration, we can name the fields that we wish to reference and get variables for accessing them. This is the method that I prefer. It generally looks the cleanest.

for (auto const& [key, blob] : shaderMap)
{

}
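
To see all of this in context, here is a small, self-contained program using the structured-bindings form. The shaderMap here is just a stand-in std::map with made-up contents, not the one from my project.

#include <iostream>
#include <map>
#include <string>

int main()
{
	std::map<std::string, int> shaderMap = {
		{ "vertex", 1 },
		{ "pixel", 2 },
		{ "compute", 3 }
	};

	// Each entry unpacks into a named key and value (C++ 17 and above).
	for (auto const& [name, id] : shaderMap)
	{
		std::cout << name << " -> " << id << std::endl;
	}
	return 0;
}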

Why not just show the “best” version?

Best is a bit subjective, and even then, it might not be available to every project. You might have a codebase that is using something other than the most recent version of the C++ language. Even if your environment does support changing the language, I wouldn’t do so arbitrarily. Though the language versions generally maintain backwards compatibility, changing the language version is a sweeping change that, for a complex project, could have unknown effects. If there is a productivity reason for making the change and the time and resources are available for fully testing the application, then proceeding might be worth considering. But I discourage giving in to the temptation to use the newest version only because it is newer.

Error Explanation: Microsoft C++ exception: Poco::NotFoundException

While working on a Direct3D 11 program, I noticed something wasn’t rendering correctly. I started to examine the debug output and came across some exceptions. These exceptions had nothing to do with my rendering error, but I wanted to know what was causing them.

Exception thrown at 0x00007FFE413FCF19 in D3DAppWindow.exe: Microsoft C++ exception: Poco::NotFoundException at memory location 0x0000000B3B5E27C0.
Exception thrown at 0x00007FFE413FCF19 in D3DAppWindow.exe: Microsoft C++ exception: Poco::NotFoundException at memory location 0x0000000B3B5E2800.

I traced this error back to my call to create a D3D11Device. To debug it any further, I’d have to start debugging code outside of what I wrote. The good news is that if you are seeing this exception, it’s not your fault. You are likely using an NVIDIA video adapter, and the exception is coming from its driver. The bad news is that there’s not anything you can do about it at the moment; it’s up to NVIDIA to fix. It may be helpful to provide information on which NVIDIA driver and OS version you use in this NVIDIA thread.



Shared Handles in C++ on Win32

Shared pointers are objects in C++ that manage pointers. As a pointer to an object is passed around, copied, or deleted, a shared pointer keeps track of how many references there are to the object it refers to. When all references to the object are destroyed or go out of scope, the shared pointer deletes the object and frees its memory. This has the effect of smart pointers in C++ acting almost like a managed memory environment. The burden on the developer of managing memory is pleasantly diminished.

The standard template library offers, among others, the class std::shared_ptr for creating shared pointers. There are some other classes, such as std::unique_ptr, with special behaviours (in this case, ensuring that only one reference to the object exists). std::shared_ptr also lets the developer specify a custom deleter for the object; if there is some specific behaviour needed when an object is being deallocated, this feature can support that. These are the signatures for some of the constructors that allow custom deleters.

template< class Y, class Deleter> shared_ptr( Y* ptr, Deleter d );
template< class Deleter> shared_ptr( std::nullptr_t ptr, Deleter d );
template< class Y, class Deleter, class Alloc > shared_ptr( Y* ptr, Deleter d, Alloc alloc );
template< class Deleter, class Alloc> shared_ptr( std::nullptr_t ptr, Deleter d, Alloc alloc );
template< class Y, class Deleter> shared_ptr( std::unique_ptr<Y, Deleter>&& r );
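
As a quick illustration of these custom-deleter constructors before we get to handles, this sketch manages a C FILE* with a lambda deleter; the file name is arbitrary and the example isn’t from the handle code that follows.

#include <cstdio>
#include <memory>

int main()
{
	// fclose() runs automatically when the last shared_ptr reference goes away.
	std::shared_ptr<FILE> file(fopen("log.txt", "w"), [](FILE* f) {
		if (f != nullptr)
		{
			fclose(f);
		}
	});
	if (file)
	{
		fputs("hello\n", file.get());
	}
	return 0;
}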

Structures like this are not limited to being used only for pointers. They can be used for other resources too. My interest was in using them to manage handles for Windows objects. Handles are values that identify a system resource, such as a file. Their value is not a memory address, but a generally opaque numeric identifier. Think of it as an ID number. When the object that a handle refers to is no longer needed, the handle should be freed with a call to CloseHandle().

I was working with a program written in C/C++ for Windows and writing a function to load the contents of a file. This is the original function.

vector<unsigned char> LoadFileContents(std::wstring sourceFileName)
{
    vector<unsigned char> retVal;
    auto hFile = CreateFile(sourceFileName.c_str(), GENERIC_READ, FILE_SHARE_READ, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (hFile != INVALID_HANDLE_VALUE)
    {
        DWORD fileSize = GetFileSize(hFile, NULL);

        retVal.resize(fileSize);
        DWORD bytesRead;
        BOOL result = ReadFile(hFile, retVal.data(), fileSize, &bytesRead, NULL);
        CloseHandle(hFile);
    }
    return retVal;
}

Well, that’s not actually the original. In the original, I forgot to make the call to CloseHandle(). Forgetting to do this can lead to resource leaks in the program, or to the file not being available for writing later because a read handle is still open. For my end goal, this won’t be the only file that I use, nor will files be the only type of handle. I wanted to manage these in a safer way. Here, I use std::unique_ptr to manage handles, with a custom deleter that closes a handle.

My custom deleter is implemented as a functor. A functor is a type of object that can be used as a function. Often these are used in callback operations. Functors, unlike typical functions, can also have state. In C++, functors are generally constructed by defining operator() for the object. operator() can take any number of arguments; for my purposes, it only needs one, the HANDLE to be closed. A HANDLE can have two values that indicate it isn’t referencing a valid object: the constant INVALID_HANDLE_VALUE (whose literal value is -1) and 0. To ensure CloseHandle() isn’t called on an invalid value, I only call it when neither of these values was passed.

struct HANDLECloser
{
	void operator()(HANDLE handle) const
	{
		if (handle != INVALID_HANDLE_VALUE && handle != 0)
		{
			CloseHandle(handle);
		}
	}
};

Since there will only ever be one object accessing my file handles, I’ll be using std::unique_ptr for them. With the above declaration, I can begin using std::unique_ptr objects immediately.

auto myFileHandle = std::unique_ptr<void, HANDLECloser>(hFile);

That’s a lot to type, though. In the interest of brevity, let’s make a declaration so that we can invoke it with fewer keystrokes.

using HANDLE_unique_ptr = std::unique_ptr<void, HANDLECloser>;

With that in place, the previous call to initialize a unique pointer could be shortened to the following.

auto myFileHandle = HANDLE_unique_ptr(hFile);

That’s a bit more concise. Let’s add one more thing. Generally, I would be using this with the Win32 CreateFile function. Let’s make a CreateFileHandle() function that takes the same parameters as CreateFile but returns our std::unique_ptr for the file handle.

HANDLE_unique_ptr CreateFileHandle(std::wstring fileName, DWORD dwDesiredAccess, DWORD dwShareMode, LPSECURITY_ATTRIBUTES lpSecurityAttributes, DWORD dwCreationDisposition, DWORD dwFlagsAndAttributes, HANDLE hTemplateFile)
{
	HANDLE handle = CreateFile(fileName.c_str(), dwDesiredAccess, dwShareMode, lpSecurityAttributes, dwCreationDisposition, dwFlagsAndAttributes, hTemplateFile);
	if (handle == INVALID_HANDLE_VALUE || handle == nullptr)
	{
		return nullptr;
	}
	return HANDLE_unique_ptr(handle);
}

Using these new pieces, the file-loading function becomes the following.

vector<unsigned char> LoadFileContents(std::wstring sourceFileName)
{
    vector<unsigned char> retVal;
    auto hFile = CreateFileHandle(sourceFileName.c_str(), GENERIC_READ, FILE_SHARE_READ, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (hFile)
    {
        DWORD fileSize = GetFileSize(hFile.get(), NULL);
        retVal.resize(fileSize);
        DWORD bytesRead;
        BOOL result = ReadFile(hFile.get(), retVal.data(), fileSize, &bytesRead, NULL);
    }
    return retVal;
}
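
The same wrapper works for handle types beyond files, since anything closed with CloseHandle() qualifies. As a sketch, here is the wrapper managing an event object; the event is only an illustration, not something from the file-loading code.

// An auto-reset event whose handle is closed when the unique_ptr goes out of scope.
auto eventHandle = HANDLE_unique_ptr(CreateEvent(NULL, FALSE, FALSE, NULL));
if (eventHandle)
{
	SetEvent(eventHandle.get());
	WaitForSingleObject(eventHandle.get(), INFINITE);
}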

There are some other good bits of code in the project from which I took this that I plan to share in the coming weeks. Some parts are simple but useful; other parts are more complex. Come back in a couple of weeks for the next bit that I have to share.




Recompiling the V8 JavaScript Engine on Windows

Note Added 2025 March 10 – These instructions no longer work. Google has dropped support for building with MSVC. It is still possible to build on Windows using Clang, but this presents new challenges, such as linking Clang binaries to MSVC binaries. More information on this change can be found in a Google Group discussion here.

Note Added 2024 September 3 – I tried to follow my own instructions on a whim today and found that some parts of the instructions don’t work. I made my way through them with adjustments to get to success.

I decided to compile the Google V8 JavaScript engine. Why? So that I could include it in another program. Google doesn’t distribute binaries for V8, but they do make the source code available. Compiling it is, in my opinion, a bit complex. This isn’t a criticism; there are a lot of options for how V8 can be built. Rather than making the permutations of these options available for each version of V8, Google leaves it to you to set the options yourself and build for your platform of interest.

But Isn’t There Already Documentation on How to Do This?

There does exist documentation from Google on compiling Chrome. But there are variations between those instructions and what must actually be done. I found myself searching the Internet for a number of other issues that I encountered and made notes on what I had to do to get around compilation problems. The documentation comes close to what’s needed, but isn’t without error and deviation.

Setting Up Your Environment

Before touching the v8 source code, ensure that you have installed Microsoft Visual Studio. I am using Microsoft Visual Studio 2022 Community Edition. There are some additional components that must be installed. In an attempt to make this setup process as scriptable as possible, I have a batch file that has the Visual Studio Installer add the necessary components. If a component is already installed, no action is taken. Though the Google V8 instructions also offer a command to accomplish the same thing, this is where I encountered my first variation from their instructions. Their instructions assume the Visual Studio Installer command is named setup.exe (it probably was in a previous version of Visual Studio), whereas my installer is named vs_installer.exe. There were also additional parameters that I had to pass, possibly because I have more than one version of Visual Studio installed (Community Edition 2022, Preview Community Edition 2022, and a 2019 version).

pushd C:\Program Files (x86)\Microsoft Visual Studio\Installer\

vs_installer.exe install --productid Microsoft.VisualStudio.Product.Community --ChannelId VisualStudio.17.Release --add Microsoft.VisualStudio.Workload.NativeDesktop  --add Microsoft.VisualStudio.Component.VC.ATLMFC  --add Microsoft.VisualStudio.Component.VC.Tools.ARM64 --add Microsoft.VisualStudio.Component.VC.MFC.ARM64 --add Microsoft.VisualStudio.Component.Windows10SDK.20348 --includeRecommended

popd

You may need to make adjustments if your installer is located in a different path.

While those components are installing, let’s get the code downloaded and put in place. I did the download and unpacking from PowerShell. All of the commands that follow were stored in a PowerShell script. Scripting the process makes it more repeatable and easier to document (since the scripts are also a record of what was done). You do not have to use the same file paths that I do, but if you change them, you will need to make adjustments to the instructions when one of these paths is used.

I generally avoid placing folders directly in the root. The one exception is a folder I make called c:\shares. There’s a structure that I conform to when placing this folder on Windows machines. For this structure, Google’s code will be placed in subdirectories of c:\shares\projects\google. In the following script, you’ll see that path used.

$depot_tools_source = "https://storage.googleapis.com/chrome-infra/depot_tools.zip"
$depot_tools_download_folder= "C:\shares\projects\google\temp\"
$depot_tools_download_path = $depot_tools_download_folder + "depot_tools.zip"
$depot_tools_path = "c:\shares\projects\google\depot_tools\"
$chromium_checkout_path = "c:\shares\projects\google\chromium"
$v8_checkout_path = "c:\shares\projects\google\"

mkdir $depot_tools_download_folder
mkdir $depot_tools_path
mkdir $chromium_checkout_path
mkdir $v8_checkout_path

pushd "C:\Program Files (x86)\Microsoft Visual Studio\Installer\"
.\vs_installer.exe install --productID Microsoft.VisualStudio.Product.Community --ChannelId VisualStudio.17.Release --add Microsoft.VisualStudio.Workload.NativeDesktop  --add Microsoft.VisualStudio.Component.VC.ATLMFC  --add Microsoft.VisualStudio.Component.VC.Tools.ARM64 --add Microsoft.VisualStudio.Component.VC.MFC.ARM64 --add Microsoft.VisualStudio.Component.Windows10SDK.20348 --includeRecommended
popd

Invoke-WebRequest -Uri $depot_tools_source -OutFile $depot_tools_download_path
Expand-Archive -LiteralPath $depot_tools_download_path -DestinationPath $depot_tools_path

After this script completes running, Visual Studio should have the necessary components and the V8/Chrome development tools are downloaded and in place.

There are some environment variables on which the build process depends. These variables could be set within batch files, set in the environment of a single command-terminal instance, or set at the system level. I chose to set them at the system level. This was not my first approach; I set them at more local levels initially, but several times when I opened a new command terminal I forgot to apply them, and I just found it easier to set them globally.

ENVIRONMENT VARIABLE        VALUE
DEPOT_TOOLS_WIN_TOOLCHAIN   0
vs2022_install              C:\Program Files\Microsoft Visual Studio\2022\Community
PATH                        c:\shares\projects\google\depot_tools\;%PATH%
Environment variables that must be set
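
To set the first two persistently from a command prompt, setx works (note that setx only affects terminals opened afterward, and it can truncate long values, so I would add the depot_tools folder to PATH through the System Properties dialog instead):

setx DEPOT_TOOLS_WIN_TOOLCHAIN 0
setx vs2022_install "C:\Program Files\Microsoft Visual Studio\2022\Community"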

From here on, we will be using the command prompt, and not PowerShell. This is because some of the commands that are part of Google’s tools are batch files that only run properly in the command prompt.

From the command terminal, run the command gclient. This will initialize the Google Tools. Next, navigate to the folder in which you want the v8 code to download. For me, this will be c:\shares\projects\google. The download process will automatically make a subfolder named v8. Run the following command.

fetch --no-history v8

This command can take a while to complete. After it completes you will have a new directory named v8 that contains the source code. Navigate to that directory.

cd v8

The online documentation that I see from Google for v8 is for version 9. I wanted to compile version 12.0.174.

git checkout 12.0.174

Update 2025 March 7

Reviewing the instructions now, I find that the above command fails. It may be necessary to fetch the tags for the versions with the following commands to get version 13.6.9.

git fetch --tags
git checkout 13.6.9

Today I am only building v8 for Windows; eventually I’ll build it for ARM64 also. Run the following commands. They will make the build directories and configurations for the different targets.

python3 .\tools\dev\v8gen.py x64.release
python3 .\tools\dev\v8gen.py x64.debug
python3 .\tools\dev\v8gen.py arm64.release
python3 .\tools\dev\v8gen.py arm64.debug

The build arguments for each environment are in a file named args.gn. Let’s update the configuration for the x64 debug build. To open the build configuration, type the following.

notepad out.gn\x64.debug\args.gn

This will open the configuration in notepad. Replace the contents with the following.

is_debug = true
target_cpu = "x64"
v8_enable_backtrace = true
v8_enable_slow_dchecks = true
v8_optimized_debug = false
v8_monolithic = true
v8_use_external_startup_data = false
is_component_build = false
is_clang = false

Chances are the only differences between the above and the initial version of the file are from the line v8_monolithic onward. Save the file. You are ready to start your build. To kick off the build, use the following command.

ninja -C out.gn\x64.debug v8_monolith

Update 2024 September 3 – Compiling this now, I’m encountering a different error. It appears the compiler I’m using takes issue with some of the nested #if directives in the source code. There was one in src/execution/frames.h around line 1274 that was problematic. It involved a line concerning enabling V8 Drumbrake (no, I don’t know what that is). It was in a call to DCHECK, which is not used in production builds, so I just removed it. I encountered similar errors in src/diagnostics/objects-debug.cc and src\wasm\wasm-objects.cc.

This will also take a while to run, but it will fail. There is a third-party component that fails to compile because of a line in a file named fmtable.cpp. You’ll have to alter a function to fix the problem. Open the file at .\v8\third_party\icu\source\i18n\fmtable.cpp. Around line 59, you will find the following code.

static inline UBool objectEquals(const UObject* a, const UObject* b) {
     // LATER: return *a == *b
     return *((const Measure*)a) == ((const Measure*)b);
}

You’ll need to change it so that it contains the following.

static inline UBool objectEquals(const UObject* a, const UObject* b) {
     // LATER: return *a == *b
     return *((const Measure*)a) == *b;
}

Save the file, and run the build command again. While that’s running, go find something else to do. Have a meal, fly a kite, read a book. You’ve got time. When you return, the build should have been successful.

Hello World

Now, let’s make a hello world program. Google already has a v8 hello world example that we can use to verify that our build was successful. We will use it for now, as I’ve not discussed anything about the v8 object library yet. Open Microsoft Visual Studio and create a new C++ Console application. Replace the code in the cpp file that it provides with Google’s code.

// Copyright 2015 the V8 project authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#include "libplatform/libplatform.h"
#include "v8-context.h"
#include "v8-initialization.h"
#include "v8-isolate.h"
#include "v8-local-handle.h"
#include "v8-primitive.h"
#include "v8-script.h"

int main(int argc, char* argv[]) {
    // Initialize V8.
    v8::V8::InitializeICUDefaultLocation(argv[0]);
    v8::V8::InitializeExternalStartupData(argv[0]);
    std::unique_ptr<v8::Platform> platform = v8::platform::NewDefaultPlatform();
    v8::V8::InitializePlatform(platform.get());
    v8::V8::Initialize();

    // Create a new Isolate and make it the current one.
    v8::Isolate::CreateParams create_params;
    create_params.array_buffer_allocator =
        v8::ArrayBuffer::Allocator::NewDefaultAllocator();
    v8::Isolate* isolate = v8::Isolate::New(create_params);
    {
        v8::Isolate::Scope isolate_scope(isolate);

        // Create a stack-allocated handle scope.
        v8::HandleScope handle_scope(isolate);

        // Create a new context.
        v8::Local<v8::Context> context = v8::Context::New(isolate);

        // Enter the context for compiling and running the hello world script.
        v8::Context::Scope context_scope(context);

        {
            // Create a string containing the JavaScript source code.
            v8::Local<v8::String> source =
                v8::String::NewFromUtf8Literal(isolate, "'Hello' + ', World!'");

            // Compile the source code.
            v8::Local<v8::Script> script =
                v8::Script::Compile(context, source).ToLocalChecked();

            // Run the script to get the result.
            v8::Local<v8::Value> result = script->Run(context).ToLocalChecked();

            // Convert the result to an UTF8 string and print it.
            v8::String::Utf8Value utf8(isolate, result);
            printf("%s\n", *utf8);
        }

        {
            // Use the JavaScript API to generate a WebAssembly module.
            //
            // |bytes| contains the binary format for the following module:
            //
            //     (func (export "add") (param i32 i32) (result i32)
            //       get_local 0
            //       get_local 1
            //       i32.add)
            //
            const char csource[] = R"(
        let bytes = new Uint8Array([
          0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00, 0x01, 0x07, 0x01,
          0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, 0x03, 0x02, 0x01, 0x00, 0x07,
          0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, 0x0a, 0x09, 0x01,
          0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b
        ]);
        let module = new WebAssembly.Module(bytes);
        let instance = new WebAssembly.Instance(module);
        instance.exports.add(3, 4);
      )";

            // Create a string containing the JavaScript source code.
            v8::Local<v8::String> source =
                v8::String::NewFromUtf8Literal(isolate, csource);

            // Compile the source code.
            v8::Local<v8::Script> script =
                v8::Script::Compile(context, source).ToLocalChecked();

            // Run the script to get the result.
            v8::Local<v8::Value> result = script->Run(context).ToLocalChecked();

            // Convert the result to a uint32 and print it.
            uint32_t number = result->Uint32Value(context).ToChecked();
            printf("3 + 4 = %u\n", number);
        }
    }

    // Dispose the isolate and tear down V8.
    isolate->Dispose();
    v8::V8::Dispose();
    v8::V8::DisposePlatform();
    delete create_params.array_buffer_allocator;
    return 0;
}

If you try to build this now, it will fail. You need to do some configuration first. Here is a quick list of the configuration changes. If you don’t understand what to do with these, that’s fine; I’ll walk you through applying them.

VC++ Directories:
	Include Directories: v8\include
	Library Directories <Debug>: v8\out.gn\x64.debug\obj
	Library Directories <Release>: v8\out.gn\x64.release\obj

C/C++
	Code Generation
		Runtime Library <Debug>: /MTd
		Runtime Library <Release>: /MT
	Preprocessor
		Preprocessor Definitions: V8_ENABLE_SANDBOX;V8_COMPRESS_POINTERS;_ITERATOR_DEBUG_LEVEL=0;

Linker
	Input
		Additional Dependencies: v8_monolith.lib;dbghelp.lib;Winmm.lib;

Right-click on the project file and select “Properties.” From the pane on the left, select VC++ Directories. In the drop-down at the top, select All Configurations. On the right, there is a field named Include Directories. Select it, and add the full path to your v8\include directory. For me, this is c:\shares\projects\google\v8\include. If you built in a different path, it will be different for you. After adding the value, select Apply. You will generally want to press Apply after each field you change.

Change the Configuration drop-down at the top to Debug. In the Library Directories entry, add the full path to your v8\out.gn\x64.debug\obj folder and click Apply. Change the Configuration drop-down to Release, and in Library Directories, add the full path to your v8\out.gn\x64.release\obj folder.

From the pane on the left, expand C/C++ and select Code Generation. On the right, set the Debug value for Runtime Library to /MTd and the Release value to /MT.

Change the Configuration drop-down back to All Configurations, then under C/C++, Preprocessor, add the following values to Preprocessor Definitions.

V8_ENABLE_SANDBOX;V8_COMPRESS_POINTERS;_ITERATOR_DEBUG_LEVEL=0;

Keep the Configuration drop-down on All Configurations. Expand Linker and select Input. For Additional Dependencies, enter v8_monolith.lib;dbghelp.lib;Winmm.lib;

With that entered, press OK. You should now be able to run the program. It passes some JavaScript to the engine for execution and prints the results.
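
If the build and configuration are right, the output is the concatenated string from the first script, followed by the result of the WebAssembly add function.

Hello, World!
3 + 4 = 7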

What’s Next

My next objective is to demonstrate how to project a C++ object into JavaScript. I also want to start thinning out the size of these files. On a machine that only consumes the V8 binaries, the entire set of build tools is not needed. At the end of the above process, the v8 folder holds 12 gigs of files. If you copy out only the build outputs and headers needed by other projects, that shrinks to 3 gigs. Further reductions could come from changing some of the compilation options.


Mastodon: @j2inet@masto.ai
Instagram: @j2inet
Facebook: @j2inet
YouTube: @j2inet
Telegram: j2inet
Twitter: @j2inet

Making a Web Crawler using the Android Web Client

Source Code

Like many others, my coworkers and I have been called back to work in the office for part of the week. Returning to the office hasn’t been without its challenges, especially since the environment has substantially changed. At the end of one week, I was asked to collect some information on ads served to the browser in certain countries. To gather this information, I used a VPN to browse from a different country and created a web crawler using JavaScript and Node. It created a browser instance, followed links starting from a specific set of pages, kept track of the resources that the pages loaded, and downloaded content that was accessed from certain domains. The app worked fine, and it collected the information that I needed. On Monday, when I was in the office, I was asked to produce a similar dataset as seen from a different country. I started my software on this task, only to find that the office network now actively blocks VPN connections.

I thought about driving back home to complete the task but decided to just make a new web crawler to run from my Android tablet. That’s what I did. I made an app with a WebView and had it load each of my starting pages. For each page that loaded, there were two sets of data I needed to capture: the resources that the page requested, and the links that were in the page. To retrieve this information, I would need a WebViewClient for the WebView. The WebViewClient is an object whose methods are called to let one intercept, or be notified of, what the WebView is doing. I was only concerned with a few methods on this object.

  • onPageFinished – Fires once a page has finished loading
  • onLoadResource – Fires when a page is requesting a resource, such as an image

When a page finishes loading, I grab the links. There is no API specifically for querying the page’s DOM. There is, however, a method on the WebView to execute JavaScript and return the result as a string. I inject a small function into the page that grabs the links, then extract them from the JSON array of strings that comes back. This is the JavaScript.

(function extractLinks(){
     var list = Array.from(document.getElementsByTagName('a'));
     for(var i=0;i<list.length;++i) {
           list[i] = list[i].href;
     }
     return list;
})()

To execute the JavaScript in the WebView, I use the WebView’s evaluateJavascript() method. The method accepts a ValueCallback object; the value delivered to it is a string holding the JSON encoding of the script’s return value. I convert that to a String array and save the links. The two references to the dataHandler object are to a class that I defined. The two methods of interest here are LinksExtracted(String[]) and PageLoadComplete(). The LinksExtracted method receives all of the URLs of the links in the page; the dataHandler is responsible for saving those. PageLoadComplete is used to create a demarcation in the data between pages. Note that this method of capturing links isn’t perfect; after a page loads, it could dynamically adjust the HTML to remove some links and add others. For my application, the result of this apparent oversight is fine.

    override fun onPageFinished(view: WebView?, url: String?) {
        super.onPageFinished(view, url)

        view!!.evaluateJavascript("(function extractLinks(){var list = Array.from(document.getElementsByTagName('a')); for(var i=0;i<list.length;++i) { list[i] = list[i].href; } return list;})()",
            object:ValueCallback<String> {
                override fun onReceiveValue(value: String) {
                    if(value != null && value != "null")
                    {
                        val gson = GsonBuilder().create()
                        val theList = gson.fromJson<ArrayList<String>>(value, object :
                            TypeToken<ArrayList<String>>(){}.type)
                        if(theList != null) {
                            dataHandler.LinksExtracted(theList.toTypedArray());
                        }
                    }
                    dataHandler.PageLoadComplete()
                }
            }
            )
    }
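
Of the two methods listed earlier, only onPageFinished is shown above. The onLoadResource side is simpler; it just hands each resource URL to the data handler as it is requested. A minimal sketch, where ResourceRequested(String) is a hypothetical method name on the same dataHandler (the real recording logic lives in the project on GitHub):

    override fun onLoadResource(view: WebView?, url: String?) {
        super.onLoadResource(view, url)
        // Record every resource (image, script, ad content) the page requests.
        url?.let { dataHandler.ResourceRequested(it) }
    }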

The links are persisted to an SQLite database. To do this, I’ve defined a data class for holding a row of data.

package net.j2i.webcrawler.data
import kotlinx.serialization.Serializable

@Serializable
data class UrlReading(val sessionID:Long=0L, val pageRequestID:Long = 0L, val url:String = "", val timestamp:Long = -1L) {
}

The sessionID will be the same for all values captured during the same run of the program. pageRequestID increments every time a new page loads. url contains the information of interest, the URL itself. And timestamp contains the time at which the URL was captured.

Creation of the database and insertion of data into it is fairly plain-vanilla code. I won’t post all of it here, but if you would like to see it, it’s on GitHub and can be found through this link: https://github.com/j2inet/sample-webcrawler/blob/main/app/src/main/java/net/j2i/webcrawler/data/UrlReadingDataHelper.kt
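
As a rough sketch of what that helper contains, an insert method in the SQLiteOpenHelper subclass would look something like the following; addReading is an illustrative name, and the real implementation is in the file linked above.

    fun addReading(reading: UrlReading) {
        // Map each field of the data class onto its table column.
        val values = ContentValues().apply {
            put(UrlReadingsContract.COLUMN_NAME_SESSION_ID, reading.sessionID)
            put(UrlReadingsContract.COLUMN_NAME_PAGE_REQUEST_ID, reading.pageRequestID)
            put(UrlReadingsContract.COLUMN_NAME_URL, reading.url)
            put(UrlReadingsContract.COLUMN_NAME_TIMESTAMP, reading.timestamp)
        }
        writableDatabase.insert(UrlReadingsContract.TABLE_NAME, null, values)
    }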

When the data is to be extracted, the program writes it to a CSV file with headers. To minimize the memory demand, I have a method on the data helper that writes each row out as the cursor reads it.

    fun writeAllRecords(os:OutputStreamWriter):List<UrlReading>  {

        os.write("SessionID, PageRequestID, Timestamp, URL\r\n")

        val readings = mutableListOf<UrlReading>()
        val db = writableDatabase
        val projection = arrayOf(
            BaseColumns._ID,
            UrlReadingsContract.COLUMN_NAME_SESSION_ID,
            UrlReadingsContract.COLUMN_NAME_PAGE_REQUEST_ID,
            UrlReadingsContract.COLUMN_NAME_URL,
            UrlReadingsContract.COLUMN_NAME_TIMESTAMP
        )
        val sortOrder = "${UrlReadingsContract.COLUMN_NAME_TIMESTAMP} ASC"
        val cursor = db.query(
            UrlReadingsContract.TABLE_NAME,
            projection,
            null,
            null,
            null,
            null,
            sortOrder
        )
        with(cursor) {
            while (moveToNext()) {
                val reading = UrlReading(
                    sessionID = getLong(getColumnIndexOrThrow(UrlReadingsContract.COLUMN_NAME_SESSION_ID)),
                    pageRequestID = getLong(getColumnIndexOrThrow(UrlReadingsContract.COLUMN_NAME_PAGE_REQUEST_ID)),
                    url = getString(getColumnIndexOrThrow(UrlReadingsContract.COLUMN_NAME_URL)),
                    // The timestamp comes from its own column.
                    timestamp = getLong(getColumnIndexOrThrow(UrlReadingsContract.COLUMN_NAME_TIMESTAMP)),
                )

                // Use the same comma separator as the header row written above.
                val line = "${reading.sessionID}, ${reading.pageRequestID}, ${reading.timestamp}, ${reading.url}\r\n"
                os.write(line)
                readings.add(reading)
            }
        }
        return readings
    }
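
With hypothetical values, the resulting file looks like this; the first line is the header written at the top of the method.

SessionID, PageRequestID, Timestamp, URL
1693751000, 1, 1693751022145, https://msn.com/
1693751000, 2, 1693751065332, https://yahoo.com/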

The program keeps track of the URLs that it finds links to and adds them to a list. When going to the next page, it randomly selects from this list (removing the selected item). However, the program first visits all of the initial URLs it was given before selecting randomly. Without this, the links found on the first page loaded could crowd out the other starting pages, leaving them unvisited or with less influence over which pages get visited. The initial URLs are added to the list, and a count of them is saved.

        UrlList.add("https://msn.com")
        UrlList.add("https://yahoo.com");
        linearLoadCount = UrlList.count()

The method for loading URLs dequeues from the beginning of the list until all of the initial URLs have been visited; after that, random reads occur.

    fun openRandomSite() {
        var index = 0
        if(linearLoadCount > 0) {
            // Still working through the initial URL list; take the item at the front.
            --linearLoadCount
        } else {
            // Initial URLs exhausted; pick one of the collected links at random.
            index = random.nextInt(UrlList.count())
        }
        val nextUrl = UrlList[index]
        UrlList.removeAt(index)
        mainWebView!!.loadUrl(nextUrl)
    }

To keep the pages cycling, the PageLoadComplete() handler queues the next call to load a random page (with a delay).

            override fun PageLoadComplete() {
                ++pageSessionID;
                mainHandler.postDelayed(object:Runnable {
                    override fun run() {
                        openRandomSite()
                    }
                },NAVIGATE_DELAY)
            }

It took less time to write this than it would have taken to drive home. The initial set of URLs is hard-coded. This was written to be used only once, so I skipped practices that would have made the program of more general utility. Nevertheless, I think it might be useful to someone. You can find the complete source code on GitHub.

https://github.com/j2inet/sample-webcrawler


Mastodon: @j2inet@masto.ai
Instagram: @j2inet
Facebook: @j2inet
YouTube: @j2inet
Telegram: j2inet
Twitter: @j2inet

USA Testing Emergency Alert System on 4 October 2023 around 2:20 pm

On 4 October 2023, around 2:20 PM, the USA is testing its emergency alert system. The test will be broadcast over radio, TV, and mobile phones. Expect the phones around you to be blaring at about this time. Don’t worry, this is only a test.

If you are likely to be in a situation where you cannot afford to have your phone go off, you might want to keep it powered off around this time. Some environments, such as courthouses, have rules requiring phones to be silent or turned off (I believe a phone going off in an Atlanta courtroom can get someone held in contempt). Even if you’ve muted everything on your phone, this alert might not respect those settings. While some phones expose settings to silence other alert types, the national alert setting has been unalterable on the phones that I’ve examined over the years.

When the test goes off, don’t be alarmed. If you have one of those emergency test radios, it might be a good opportunity to see how well it works.

Updating your Profiles in Cisco AnyConnect (macOS)

Some years ago, I worked with a client and had to install Cisco AnyConnect on my Mac. After the work was done, I uninstalled it. Recently, I found myself needing the VPN for a different client. On reinstalling the software, all of the old settings from the previous engagement were still there, and the VPN software refused to save the new connection URL. To get the client to work the way I needed, I had to update the profile manually.

One of the places where the Cisco AnyConnect software saves information is /opt/cisco/anyconnect/profile. Navigating to that path in Terminal, you will find a couple of files. The one of interest is Anyconnect-SAML.xml, an XML file that contains the connection settings. The software also remembers the last server it attempted to connect to; I don’t know where that information is stored, but it isn’t needed for this change. The simplest way to address the connection problem is to rename the XML file. I say “rename” and not “delete” so that the information is still available should you need it. Renaming has the same effect as deleting but allows you to roll back. I changed the file to a name ending in .backup.
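
In Terminal, the rename is a single command; sudo may be required, since files under /opt are usually owned by root.

cd /opt/cisco/anyconnect/profile
sudo mv Anyconnect-SAML.xml Anyconnect-SAML.xml.backup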

With the file effectively deleted, restart the Cisco AnyConnect software. It will still show the last server that you connected to. Enter your new VPN URL and connect. After successfully connecting, the software will remember this URL and make it available the next time you need to connect.

Customizing the Logitech/Saitek Flight Instrument Panel

Saitek (which was later acquired by Logitech) created flight instrument hardware primarily associated with Microsoft Flight Simulator. While they make various device types, the one in which I had the most interest is the “Flight Instrument Panel,” a small LCD display that connects to the computer over USB. It doesn’t appear that Logitech has made any changes to the hardware since its release; the device still uses a mini-USB connector.

I have purposes for it beyond Microsoft Flight Simulator, so I wanted to perform some customization on the panel. After going through the setup, the panel began to display information. By default, it displays promotional information for other hardware until an application tells it to display something else. I’m not fond of advertisements on my idle devices and wanted to change these first. Thankfully, this can be done without any programming. The default display images come from JPG files that can be found in the file system after the device is set up. Navigate to C:\Program Files\Logitech\DirectOutput to see the files. Replace any one of them to alter what the screen displays.

Before purchasing a panel, I searched for an SDK for it. I didn’t find one, but I found that plenty of other people had software projects for it and figured I would be able to make it work. Only after setting the device up did I find that the SDK was closer than I realized: documentation for controlling the panel installs alongside it. The group of APIs in the SDK is referred to as DirectOutput. No, that’s not one of Microsoft’s DirectX APIs (like Direct3D, DirectInput, and so on); that’s just the name Saitek selected for their SDK.

Mastodon: @j2inet@masto.ai
Instagram: @j2inet
Facebook: @j2inet
YouTube: @j2inet
Telegram: j2inet
Twitter: @j2inet

Setting a DLL Path at Runtime for P/Invoke

.NET applications can call functions in native DLLs using the [DllImport] attribute. The attribute takes as its argument the name of the DLL in which the target function is stored. But what does one do if the DLL’s location is not among the paths that the system searches? First, let’s consider where the system looks for DLLs, in the order that it searches them.

  1. The Application Directory
  2. The System Directory
  3. The Windows Directory
  4. The Current Directory
  5. Directories in the PATH environment variable

If the target DLL isn’t in one of those folders, it won’t be found. There is a Win32 function that lets an application add one more folder for the system to search when resolving a DLL location at runtime. The function has the signature BOOL SetDllDirectory(LPCWSTR lpPathName). When this function is called with a valid path, the new search order is as follows.

  1. The Application Directory
  2. The Directory passed in SetDllDirectory()
  3. The System Directory
  4. The Windows Directory
  5. The Current Directory
  6. Directories in the PATH environment variable

The statement for adding a declaration for SetDllDirectory follows.

[DllImport("kernel32.dll", SetLastError = true)]
static extern bool SetDllDirectory(string lpPathName);
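
As a usage sketch, call SetDllDirectory before the first call into the native library; the DLL isn’t actually loaded until the first P/Invoke touches it. The library name, folder, and imported Add function below are hypothetical placeholders.

using System;
using System.ComponentModel;
using System.Runtime.InteropServices;

class Program
{
    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool SetDllDirectory(string lpPathName);

    // Hypothetical import from a DLL that lives outside the default search path.
    [DllImport("MyNativeLibrary.dll")]
    static extern int Add(int a, int b);

    static void Main()
    {
        // Register the extra folder before the first P/Invoke triggers the DLL load.
        if (!SetDllDirectory(@"C:\Tools\MyNativeLibrary"))
            throw new Win32Exception(Marshal.GetLastWin32Error());

        Console.WriteLine(Add(3, 4)); // prints 7
    }
}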

Mastodon: @j2inet@masto.ai
Instagram: @j2inet
Facebook: @j2inet
YouTube: @j2inet
Telegram: j2inet
Twitter: @j2inet

Erasing an EPROM with Alternative Devices

I’ve come into possession of an EPROM and got a programmer for it. Writing data to it was easy. Erasing data is another matter. Note that I said EPROM and not EEPROM. What’s the difference? The first E in EEPROM means “Electrically.” Electrically Erasable Programmable Read-Only Memory can be cleared with an electrical circuit. The EPROM I have must be erased with UV light. There is a window in the ceramic package that exposes the silicon underneath. With enough UV light through this window, the chip is erased.

There are devices sold specifically to erase such memory. I’m not using those. Instead, I have a number of other UV sources to test with. These are:

  • The Sun
  • A Portable UV Phone Cleaner
  • A Clamshell UV Phone Cleaner
  • A Tube Blacklight

I’m using an M27C256 32K EPROM. To know whether my attempts at erasing worked, I first needed to put something on it. I filled the memory with bytes counting from 0 to 255, repeating the sequence each time I reached the end, until the entire 32K was filled with the pattern. To produce a file with the pattern, I wrote a few lines of code.

// See https://aka.ms/new-console-template for more information
// Fill the EPROM's full 32K (0x8000 bytes) with a repeating 0-255 pattern.
byte[] buffer = new byte[0x8000];
for (int i = 0; i < buffer.Length; i++)
{
    buffer[i] = (byte)i;
}
using (FileStream fs = new FileStream("content.bin", FileMode.Create, FileAccess.Write))
{
    fs.Write(buffer, 0, buffer.Length);
}

Now to get the resultant file copied to the EPROM. The easiest way to do that is with a dedicated EPROM programmer. They are relatively cheap, easy to find, and versatile. I found one on Amazon that worked well for me. Using it was only a matter of selecting the type of EPROM, selecting the file containing the content to be written, and pressing the Program button.

The software for writing information to the EPROMs

Reading from the EPROM is just as simple. After the EPROM is connected to the programmer and its model is selected in the software, a READ button copies all of the bytes from the memory device and displays them in a hex editor. This is how I determine whether an EPROM has been erased. Now that I have a way to read and write the EPROM, let’s test the different means of erasure.

Using the Sun

These results were the most disappointing. After having an EPROM out for most of the day, the ROM was not erased. Speaking to someone else, I was told that it would take several days of exposure to erase it. I chose not to leave the EPROM out that long, as I’d risk forgetting it was out there when the weather turned wet.

Using a Portable UV Sanitizer

The portable UV sanitizer that I tried was a Christmas gift at the end of 2022. Such devices became widely available in the wake of COVID. This unit charges over a USB cable and runs off a battery. When turned on, it stays on until it is turned off, the battery dies, or someone turns it over; it only emits light while facing downward. I speculate this is a safety feature; you don’t want to look directly into the UV light.

My first attempts to erase one of the EPROMs with this sanitizer were not successful. After several sessions, the EPROMs still had their data. While I wouldn’t look directly into the UV light, I could point my camera at it safely. The picture was informative: the light was brighter at the end closer to the power source and very dim at the other end. Before, I had only been ensuring that the EPROM’s window was under some portion of the lighting tube. Now I knew to place it close to the brighter end of the UV emitter. With the new placement, I was able to erase an EPROM in about 60 minutes.

UV Sanitizer with the EPROM at the brighter end.

Provided that someone is only erasing a single EPROM and isn’t in a hurry, I think this could be an adequate solution. With more than one, though, it might not work as well, especially when one considers the time needed to recharge the battery after it has been drained by an erasing session.

Clamshell UV Phone Cleaner

I received this clamshell UV phone cleaner as a gift nearly a decade ago. This specific model isn’t sold anymore, but newer variations are available under the name PhoneSoap. These have a few advantages over the portable UV sanitizer. It runs from a 12-volt power source, so there’s no waiting for it to recharge before you can use it. It also appears to be a lot brighter. The UV emitter automatically deactivates when the case is opened, but in the brief moment between the case cracking open and the light switching off, some of the light spills out of the unit; it is either a lot brighter, or it emits more light in the visible spectrum. The unit I use has emitters in both the hinged lid and the lower area of the case, so EPROMs placed in it can be oriented face-up or face-down and still be erased. When the case is closed, the emitter turns on for 300 seconds and then shuts off. I’d like it to run longer for my purposes, but 300 seconds isn’t bad. After one 5-minute session in the sanitizer, an EPROM still had data on it; after a second 5-minute session, it showed as erased. I think this unit is worthy of consideration.

Tube UV Light

I have an old UV tube light that I purchased in my teens. I dug it up and found a power supply for it. The light still works, but after leaving an EPROM in direct contact with it for well over 24 hours, I found no change. I had expected this outcome for a few reasons. Among them: UV lights of this type are commonly used where people can see them, while the cleaning UV lights carry warnings to keep them away from skin and eyes. From the glimpse I got through the phone’s camera, they appear to operate at a different wavelength, though a phone camera is hardly a true measure of wavelength. There’s not much more to be said about the tube light.

The Winner

The clear winner here is the clamshell UV light. It was easy to use and erased the EPROM in ten minutes. The portable UV cleaner comes in second. The other sources didn’t cross the finish line, even given a generous amount of time to do so. It might be possible to eventually erase an EPROM with them, but I don’t think it’s worth the time.

Now that I have a reliable way to erase these EPROMs, I can use them in the MC6800 computer that I was working on.


Mastodon: @j2inet@masto.ai
Instagram: @j2inet
Facebook: @j2inet
YouTube: @j2inet
Telegram: j2inet
Twitter: @j2inet

Posts may contain products with affiliate links. When you make purchases using these links, we receive a small commission at no extra cost to you. Thank you for your support.

Jameco Valuepro BB-4T7D 3220-Point Solderless Breadboard