Kerbal Space Program 2 Available February 23, 2023

I don’t write much about games (though I am considering it), but I thought this one demands a mention. Kerbal Space Program is a game series about running your own space program. It has an especially flexible system for designing and launching vehicles that travel over land and sea, through the air, and into space. Unlike many other games, Kerbal Space Program uses a physics system that follows many concepts from astrodynamics; if you ever wanted to learn about orbital mechanics, KSP is a great testing ground. The game places the player in a scaled-down solar system to explore.

KSP 2 builds upon KSP with additional customization and UI updates, and adds interstellar travel and better support for building bases and colonies on other planets. In later releases, the game’s maker plans to add multiplayer support. The game is being released as an early-access preview, similar to the path the original KSP took, with frequent updates based on player feedback.

Humble Bundle Developer Book Offer

Humble Bundle is known for offering games where you decide the price you pay, with the money going to charity. One of the offers available now is a collection of 27 books from Packt Publishing on software development and related topics. For donating 18 USD, the entire collection of 27 books is available. Donating less gets you a subset: 10 USD unlocks a ten-item bundle, and 1 USD unlocks a three-item bundle. (Note: the books in the smaller bundles are preselected and cannot be changed.) Many of the books cover C++ and Java, ranging from beginner to advanced, and there are a few on Python, Go, and discrete mathematics. At the time of this posting, the “Programming Mega Bundle” is available for another 14 days. The books come in PDF, ePub, and MOBI formats.


Posts may contain products with affiliate links. When you make purchases using these links, we receive a small commission at no extra cost to you. Thank you for your support.

Twitter: @j2inet
Instagram: @j2inet
Facebook: j2inet
YouTube: j2inet
Telegram: j2inet

DART Mission Successful

In the interest of ensuring humanity doesn’t follow the pathway of the dinosaurs, NASA recently carried out a mission known as DART. Its purpose was to determine whether it is possible to change the trajectory of an asteroid to prevent it from impacting Earth. The asteroid chosen wasn’t endangering Earth and was selected only for testing. The pair of asteroids observed are named Didymos and Dimorphos. Before the impact, Dimorphos completed an orbit around Didymos every 11 hours and 55 minutes. If NASA successfully affected the trajectory of Dimorphos, they expected the orbital period to shorten by about 10 minutes. The impact actually shortened the orbital period by 32 minutes. This marks the first time that humans have deliberately altered the orbit of a celestial body.

The orbit was altered by crashing a spacecraft into the asteroid at a speed of 22,530 kilometers per hour, which of course destroyed the spacecraft itself. Though the mission was successful, observations are ongoing. In another four years, the European Space Agency (ESA) has a fly-by planned to collect more information.

In the event of a threatening asteroid, the expectation is that, if it is discovered early enough, flying an impactor into it could alter its trajectory enough that it is no longer a threat to life here on Earth.

References:

CNN
NPR
Fox News



Samsung Developer Conference 2022

On Wednesday, Samsung held its 2022 developer conference. A standout attribute of this conference is that people were invited to attend in person, something I’ve not really seen at developer conferences since 2019 (for obvious reasons🦠). Of course, for those who could not attend, many aspects of the conference were also streamed from https://samsungdeveloperconference.com and from their YouTube channel (https://www.youtube.com/c/SamsungDevelopers).

Concerning the content, the conference felt a bit heavier on items of consumer interest. The keynote highlighted Knox Matrix (Samsung’s blockchain-based security solution spanning their devices, not just phones), Samsung TV Plus, gaming, Tizen, and more.

The sessions for the conference were available either as prerecorded presentations or as live sessions. The prerecorded sessions were made available all at once.

Android

In addition to updating their interface (One UI, coming to the S22 series at the end of the month), Samsung is adding a task bar to the Tab S8 and their foldable phones, enabling users to switch applications without going to the home screen or the task switcher. Samsung also covered multitasking support; Samsung’s phones can run two or three applications simultaneously, and many of the multitasking features use standard Android APIs. There are multiple levels of support that an application can have on multi-window-capable devices. One is simply supporting the window being resized. FLAG_ACTIVITY_LAUNCH_ADJACENT indicates that an application was designed for a multi-window environment. New interactions enabled by multi-window applications include drag-and-drop from one instance to another, multiple instances of an application, and support for “flex mode” (where each side of a foldable device is used for a different purpose).

Some well-known applications already support features for these environments, including Gmail, Facebook, Microsoft Outlook, and TikTok.

Presentations

Multitasking Experiences
LE Wireless Audio

Tizen

It’s been 10 years since Tizen was released in 2012. In previous years, Samsung has presented Tizen as its operating system for a wide range of devices; the OS could be found running on some cameras, phones, TVs, and wearables. Tizen got its strongest footing in TVs: you’ll find it on all of the Samsung TVs available now above a certain size, on some computer monitors, and on a few TVs from other manufacturers. Its presence on other device types has diminished, though, with Samsung’s wearables now using Wear OS and the Tizen phones being out of production. I encountered some of the “Tizen Everywhere” marketing, but it now appears to refer to the wide range of displays that run Tizen.

One of Samsung’s presentations on Tizen had its own timeline of Tizen’s evolution. I might make my own, since I’ve followed the OS since its proto-version, Bada. Samsung announced Tizen 7.0. The features highlighted in the release were in the areas of

  • OpenXR runtime
  • Real-time kernel
  • 3D rendering enhancements
  • Android HAL support
  • Cross-platform improvements
  • Natural user interface enhancements

I personally found the natural user interface enhancements interesting; they include a lot of AI-driven features. Support for running Tizen applications on Android was also mentioned. I’m curious what this means, though. If typical Samsung Android devices can run Tizen applications, it gives the OS new relevance and strengthens the “Tizen Everywhere” label. Tizen has also been updated to use a more recent Chromium release for its web engine. Tizen supports Flutter as well; that support was actually released last year, but compatibility and performance are improved with Tizen 7.0.

Samsung has also exposed more of the native SDKs to C# and C in Tizen 7.0. For .NET developers, Tizen 7.0 brings increased MAUI support.

Presentations

What’s new in Tizen
Tizen Everywhere

Samsung TV Plus

This is Samsung’s IPTV service. It is integrated into the TV in such a way that it is indistinguishable from OTA channels. The parties most interested in what this service has to offer are likely advertisers; Samsung provided information both on making one’s video content available on Samsung TV Plus and on how to monetize it. While I don’t see myself implementing features related to this, I did find the presentation interesting. About five minutes before a show airs, its ad slots become available to advertisers, and the ad inventory is auctioned off.

Presentations

Samsung TV Plus
Home Connectivity Alliance

Gaming

The TVs support being paired with a Bluetooth controller and streaming games through the Samsung Gaming Hub. HTML-based games are served to the phone via what Samsung calls Instant Play. Samsung also showed off the features it has made available for immersive audio within gaming environments.

Presentations

Dolby Atmos with Games
Immersive Experiences on Big Screens

Health

Samsung says they worked with Google to come up with a single set of APIs that developers can use for health apps. Often, Samsung begins developing for a set of hardware features, and later Samsung and Google normalize the way of interacting with them. I thought these sessions would be all about Samsung Health (the application that lets you log your health stats on the phone), but the development also included their large-screen (TV) interfaces, with enhancements for telehealth visits. Collection of health-related data has been enhanced on the Galaxy Watch 5. One of the enhancements is a microcontroller dedicated to collecting health data while the CPU sleeps, which allows the watch to collect information with less demand on the battery. The new watch is also able to measure body composition through electrical signals.

Presentations

TeleHealth in Samsung Devices
Expand Health Experiences with Galaxy Watch

IoT

Samsung’s SmartThings network now also includes the ability to find other devices and even communicate data to them. Like other finding networks, their solution is based on devices being able to communicate with each other. Devices can send two bytes of data through the network. How these two bytes are used is up to the device. Two bytes isn’t a lot, but it could still be of utility, such as a device sending a desired temperature to a thermostat, or another device simply signaling “I’m home.”
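As a sketch of how much a two-byte payload can carry, consider packing a temperature into it. The encoding below is hypothetical (it is not a SmartThings API), just an illustration of fitting a signed tenths-of-a-degree value into two bytes:

```javascript
// Hypothetical encoding: pack a signed temperature, in tenths of a degree
// Celsius, into a two-byte payload, and unpack it on the receiving side.
function encodeTemperature(celsius) {
    var tenths = Math.round(celsius * 10);     // e.g. 21.5 -> 215
    var buffer = new ArrayBuffer(2);
    new DataView(buffer).setInt16(0, tenths);  // big-endian by default
    return new Uint8Array(buffer);             // the 2 bytes to transmit
}

function decodeTemperature(bytes) {
    // Read the signed 16-bit value back and restore the decimal place.
    return new DataView(bytes.buffer).getInt16(0) / 10;
}
```

A signed 16-bit value covers -3276.8 through 3276.7, far more range than any thermostat needs, which is the point: two bytes is small, but with a fixed scheme on both ends it is enough for many signals.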

Presentations

SmartThings FindMy
Home Connectivity Alliance

Other Sessions

There were plenty of other topic areas covered; I’ve only highlighted a few. If you would like to see the presentations for yourself, visit the YouTube channel or the Samsung Developer Conference page.



Image Maps Made for Creatives

Many of the people with whom I work are classified as either technical or creative (though there is a spectrum between these classifications). On many projects, the creative workers design UIs, and the technical people transform those designs into something that works. I’m a proponent of empowering those who create a design with the ability to implement it. This is especially preferable on projects where a design will go through several iterations.

I was recently working on a project that included a menu with a map of a building. Clicking on a room in the map would take the user to a web page with information about the room. I had expected the rooms on the map to be generally rectangular. When I received the map, I found that many of the rooms had irregular shapes. HTML does provide a solution for defining clickable shapes within an image through image maps. I’ve never been a fan of those, and for this specific project I would not be able to ask the creatives to update the image map, so I decided on a different solution. I can’t show the actual map that was being displayed; as an example, I’ll use a picture of some lenses sitting in the corner of my room.

Collection of Lenses

Let’s say I wanted someone to be able to click on a lens and get information about it. In this picture, the lenses overlap, so defining rectangular regions isn’t sufficient. I opened the picture in a paint program and applied color in a layer over the objects of interest, with each color associated with a different object classification. Image editing isn’t my skill; the result looks rough, but it is sufficient. This second image will be used in the HTML page to figure out which object someone has clicked on. I’ll have a mapping of these color codes to objects.

When a user clicks on the real image, the pixel color data is extracted from the associated image map and converted to a hex string. To extract the pixel data, the image map is rendered to a canvas off-screen. The canvas’s context exposes methods for accessing the pixel data. The following code renders the image map to a canvas and sets a variable containing the canvas 2D context.

// Renders the color-coded image map to an off-screen canvas and stores
// its 2D context in areaMapContext (declared at a higher scope).
function prepareMap(width, height) {
    var imageMap = document.getElementById('target-map');
    var canvas = document.createElement('canvas');
    canvas.width = width;
    canvas.height = height;
    var canvasContext = canvas.getContext('2d');
    // Scale the map image to the displayed image's dimensions so that
    // click coordinates line up with the pixels read back later.
    canvasContext.drawImage(imageMap, 0, 0, width, height);
    areaMapContext = canvasContext;
}

I need to know the position of the image relative to the browser’s client area. To retrieve that information, I have a function that walks up through the image’s positioning containers and accumulates the offsets into a usable set of coordinates.

// Walks the offsetParent chain, summing offsets to get the element's
// position relative to the document.
function FindPosition(oElementArg) {
    if (oElementArg == undefined)
        return [0, 0];
    var oElement = oElementArg;
    if (typeof (oElement.offsetParent) != "undefined") {
        for (var posX = 0, posY = 0; oElement; oElement = oElement.offsetParent) {
            posX += oElement.offsetLeft;
            posY += oElement.offsetTop;
        }
        return [posX, posY];
    }
    return [0, 0];
}

The overall flow of what happens during a click is defined within mapClick in the example code. To convert the coordinates on which someone clicked (relative to the body of the document) to coordinates relative to the image, I only need to subtract the offsets returned by the FindPosition function. The retrieved color code for the area on which the user clicked is used as an indexer into the color-code-to-product-identifier mapping, and the product identifier is used as an indexer into the product-identifier-to-product-data mapping.

function mapClick(e) {
    // Click position relative to the document.
    var PosX = e.pageX;
    var PosY = e.pageY;
    // Subtract the image's document offsets to get image-relative coordinates.
    var position = FindPosition(targetImage);
    var readX = PosX - position[0];
    var readY = PosY - position[1];

    // Lazily build the off-screen canvas the first time it is needed.
    if (!areaMapContext) {
        prepareMap(targetImage.width, targetImage.height);
    }
    // Read the single pixel under the click from the color-coded map.
    var pixelData = areaMapContext.getImageData(readX, readY, 1, 1).data;
    var newState = getStateForColor(pixelData[0], pixelData[1], pixelData[2]);
    var selectedProduct = productData[newState];
    showProduct(selectedProduct);
}
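The getStateForColor helper isn’t shown in the excerpt above. A minimal sketch, assuming the mapping is a plain object keyed by lowercase hex strings (the color values and state names here are hypothetical):

```javascript
// Hypothetical color-code-to-state map; keys are hex strings from the
// overlay image, values are object/state identifiers.
var colorToState = {
    "ff0000": "wideAngleLens",
    "00ff00": "telephotoLens"
};

// Converts an RGB triplet (as returned by getImageData) into a lowercase
// hex string and looks up the associated state identifier.
function getStateForColor(r, g, b) {
    var hex = [r, g, b].map(function (c) {
        return c.toString(16).padStart(2, "0");
    }).join("");
    return colorToState[hex];
}
```

Clicks on unpainted areas simply return undefined, which the click handler can treat as “no selection.”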

One could simplify the mappings by having the color data map directly to product information. I chose to keep the two separated, though. If the color scheme were ever changed (which I think is very possible for a number of reasons), I thought it better that these two items of data be decoupled from each other.
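The decoupling can be illustrated with two small lookup objects (the color codes, identifiers, and product fields below are hypothetical, not from the project):

```javascript
// Map 1: overlay color code -> product identifier.
// If the overlay's color scheme changes, only this object needs updating.
var colorToProductId = {
    "ff0000": "lens-50mm",
    "0000ff": "lens-85mm"
};

// Map 2: product identifier -> product data. Unaffected by color changes.
var productData = {
    "lens-50mm": { name: "50mm Prime", url: "/products/lens-50mm" },
    "lens-85mm": { name: "85mm Prime", url: "/products/lens-85mm" }
};

// Resolve a clicked color code to product data via the two maps.
function productForColorCode(code) {
    return productData[colorToProductId[code]];
}
```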

You can find the full source code for this post on GitHub at this url. Because of security restrictions in the browser, you must run this code within a local HTTP server. Attempting to run it from the file system will fail; when the page is loaded from a local file, the browser treats the canvas as tainted and blocks reading its pixel data. I also have brief videos on my social media accounts walking through the code.



Stadia Cancelled

Recently, Google announced that it will be bringing Stadia to an end. Stadia was Google’s game streaming service; with the purchase of a controller and a Chromecast, users could take advantage of GPUs in the cloud to play their games. To use the service, there was both a monthly fee for access, and games had to be purchased individually. The cancellation closely follows reports that Sundar Pichai told employees of plans for cost cuts to make Google more profitable[src]. Google also released a blog post [src] stating that the service didn’t get the traction they had hoped for.

A few years ago, we also launched a consumer gaming service, Stadia. And while Stadia’s approach to streaming games for consumers was built on a strong technology foundation, it hasn’t gained the traction with users that we expected so we’ve made the difficult decision to begin winding down our Stadia streaming service.

I’m in possession of a number of Stadia units myself. Speaking only for myself, I didn’t have faith in the product. I carry skepticism of online-only games, having lost games to smaller incidents before. One of the first Xbox 360 games I purchased online became inaccessible on my other Xboxes when the publisher left the store. I’ve had other locally installed games become less functional, or completely non-functional, because a server was taken offline. When I purchase an app or a game, I do so knowing that it could become unusable and unavailable. I don’t mind paying small amounts for what I see as temporary access to a game or app. But paying 60 USD for a game that could essentially evaporate out of my account was not comfortable for me, nor was the ongoing 10 USD/month I’d pay for access to the service.

I don’t know why other people might not have jumped on board, especially with the component shortage making the acquisition of an Xbox or PlayStation challenging. Those consoles have been out for two years, and we still are not in a place where someone can walk into a retail store with high confidence that there will be a unit on the shelf to purchase.

What does this all mean for those who purchased games? Financially, the outcome is favorable: they are getting a refund from Google.

We’re grateful to the dedicated Stadia players that have been with us from the start. We will be refunding all Stadia hardware purchases made through the Google Store, and all game and add-on content purchases made through the Stadia store. Players will continue to have access to their games library and play through January 18, 2023 so they can complete final play sessions. We expect to have the majority of refunds completed by mid-January, 2023. We have more details for players on this process on our Help Center.

The Chromecasts, of course, are still very usable and functional. The controllers themselves are, as far as I can tell, e-waste now. Out of curiosity, I looked up the Stadia hardware in the Google Store; it is still listed, with no ability to make a purchase.

I very much wish that Google would release the source code or some other information so that the community could make the controllers useful, but since they are giving full refunds, I don’t think they will be doing much more. The only thing people might lose is their saved games (for games that do not support cross-platform progress, as Destiny did). According to a post on Reddit, Google did acknowledge the desire for the controllers to remain useful after the shutdown, but no promises were made [src].

